Part III Compression
Chapter 9 Quantization
Abstract In a vector retrieval system, it is usually not enough to process queries as fast as possible. It is equally important to reduce the size of the index by compressing vectors. Compression, however, must be done in such a way that either decompressing the vectors during retrieval incurs a negligible cost, or distances can be computed (approximately) in the compressed domain, rendering it unnecessary to decompress compressed vectors during retrieval. This chapter introduces a class of vector compression algorithms, known as quantization, that is inspired by clustering.
# 9.1 Vector Quantization
Let us take a step back and present a different mental model of the clustering-based retrieval framework discussed in Chapter 7. At a high level, we band together points that are placed by ζ(·) into cluster i and represent that group by µi, for i ∈ [C]. In the first stage of the search for query q, we take the following conceptual step: First, we compute δ(q, µi) for every i and construct a "table" that maps i to δ(q, µi). We next approximate δ(q, u) for every u ∈ X using the resulting table: If u ∈ ζ⁻¹(i), then we look up an estimate of its distance to q from the i-th row of the table. We then identify the ℓ closest distances, and perform a secondary search over the corresponding vectors.
This presentation of clustering for top-k retrieval highlights an important fact that does not come across as clearly in our original description of the algorithm: We have made an implicit assumption that δ(q, u) ≈ δ(q, µi) for all u ∈ ζ⁻¹(i). That is why we presume that if a cluster minimizes δ(q, ·), then the points within it are also likely to minimize δ(q, ·). That is, in turn, why we deem it sufficient to search over the points within the top-ℓ clusters. Put differently, within the first stage of search, we appear to be approximating every point u ∈ ζ⁻¹(i) with ũ = µi. Because there are C discrete choices to consider for every data point, we can say that we quantize the vectors into [C]. Consequently, we can encode each vector using only log₂ C bits, and an entire collection of vectors using m log₂ C bits! Altogether, we can represent a collection X using O(Cd + m log₂ C) space, and compute distances to a query by performing m look-ups into a table that itself takes O(Cd) time to construct. That quantity can be far smaller than the O(md) required by the naïve distance computation algorithm.
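To make this bookkeeping concrete, the following is a minimal numpy sketch of table-based distance estimation with a vector quantizer. It is illustrative only: the helper names (`quantize`, `build_distance_table`, `estimate_distances`) are not from the monograph, and the centroids are assumed to have been produced by any KMeans routine.

```python
import numpy as np

def quantize(X, centroids):
    # zeta(u): index of the nearest centroid for each row of X.
    # X has shape (m, d); centroids has shape (C, d).
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)  # (m, C)
    return d2.argmin(axis=1)

def build_distance_table(q, centroids):
    # Maps i -> delta(q, mu_i); O(Cd) work, done once per query.
    return np.linalg.norm(centroids - q, axis=1)

def estimate_distances(codes, table):
    # delta(q, u) is approximated by delta(q, mu_{zeta(u)}): one lookup per point.
    return table[codes]
```

A query is then processed by calling `build_distance_table` once and `estimate_distances` for the m stored codes; the top-ℓ clusters follow from sorting the table itself.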
Clearly, the approximation error, ∥u − ũ∥, is a function of C. As we increase C, this approximation improves, so that ∥u − ũ∥ → 0 and |δ(q, u) − δ(q, ũ)| → 0. Indeed, C = m implies that ũ = u for every u. But increasing C results in an increased space complexity and a less efficient distance computation. At C = m, for example, our table-building exercise does not help speed up distance computation for individual data points, because we must construct the table in O(md) time anyway. Finding the right C is therefore critical to space- and time-complexity, as well as to the approximation (quantization) error.
# 9.1.1 Codebooks and Codewords
What we described above is known as vector quantization [Gray and Neuhoff, 1998] for vectors in the L2 space. We will therefore assume that δ(u, v) = ∥u − v∥₂ in the remainder of this section. The function ζ : ℝᵈ → [C] is called a quantizer, the individual centroids are referred to as codewords, and the set of C codewords makes up a codebook. It is easy to see that the set ζ⁻¹(i) is the intersection of X with the Voronoi region associated with codeword µi. The approximation quality of a given codebook is measured by the familiar mean squared error, E[∥µ_ζ(U) − U∥₂²], with U denoting a random vector. Interestingly, that is exactly the objective that is minimized by Lloyd's algorithm for KMeans clustering. As such, an optimal codebook is one that satisfies Lloyd's optimality conditions: each data point must be quantized to its nearest codeword, and each Voronoi region must be represented by its mean. That is why KMeans is our default choice for ζ.
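The two optimality conditions translate directly into the two alternating steps of Lloyd's algorithm. Below is a minimal, illustrative numpy sketch of those iterations (not code from the monograph); in practice, any mature KMeans implementation can be used in its place.

```python
import numpy as np

def lloyd(X, C, iters=25, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=C, replace=False)].copy()
    for _ in range(iters):
        # Condition 1: quantize each point to its nearest codeword.
        codes = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)
        # Condition 2: represent each Voronoi region by its mean.
        for i in range(C):
            members = X[codes == i]
            if len(members) > 0:
                centroids[i] = members.mean(axis=0)
    return centroids, codes
```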
# 9.2 Product Quantization
As we noted earlier, the quantization error is a function of the number of clusters, C: A larger value of C drives down the approximation error, making the quantization and the subsequent top-k retrieval solution more accurate and effective. However, realistically, C cannot become too large, because then the framework would collapse to exhaustive search, degrading its efficiency. How may we reconcile the two seemingly opposing forces?
Jégou et al. [2011] gave an answer to that question in the form of Product Quantization (PQ). The idea is easy to describe at a high level: Whereas in vector quantization we quantize the entire vector into one of C clusters, in PQ we break up a vector into orthogonal subspaces and perform vector quantization on the individual chunks separately. The quantized vector is then a concatenation of the quantized subspaces.
Formally, suppose that the number of dimensions d is divisible by d∘, and let L = d/d∘. Define a selector matrix Sᵢ ∈ {0, 1}^{d∘×d}, 1 ≤ i ≤ L, as a matrix with L blocks in {0, 1}^{d∘×d∘}, where all blocks are 0 except the i-th block, which is the identity. The following is an example for d = 6, d∘ = 2, and i = 2:

$$ S_2 = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}. $$

For a given vector u ∈ ℝᵈ, Sᵢu gives the i-th d∘-dimensional subspace, so that we can write u = ⊕ᵢ Sᵢu. Suppose further that we have L quantizers ζ₁ through ζ_L, where ζᵢ : ℝ^{d∘} → [C] maps the subspace selected by Sᵢ to one of C clusters. Each ζᵢ gives us C centroids µᵢ,ⱼ for j ∈ [C].
Using the notation above, we can express the PQ code for a vector u as L cluster identifiers, ζᵢ(Sᵢu), for i ∈ [L]. We can therefore quantize a d-dimensional vector using L log₂ C bits. Observe that, when L = 1 (or equivalently, d∘ = d), PQ reduces to vector quantization. When L = d, on the other hand, PQ performs scalar quantization per dimension.
Given this scheme, our approximation of u is ũ = ⊕ᵢ µ_{i,ζᵢ(u)}. It is easy to see that the quantization error E[∥U − Ũ∥₂²], with U denoting a random vector drawn from X and Ũ its reconstruction, is the sum of the quantization errors of the individual subspaces:

$$ \mathbb{E}\big[\lVert U - \tilde{U}\rVert_2^2\big] = \frac{1}{m}\sum_{u \in \mathcal{X}} \Big\lVert u - \bigoplus_{i=1}^{L} \mu_{i,\zeta_i(u)} \Big\rVert_2^2 = \frac{1}{m}\sum_{u \in \mathcal{X}} \sum_{i=1}^{L} \big\lVert S_i u - \mu_{i,\zeta_i(u)} \big\rVert_2^2. $$
As a result, learning the L codebooks can be formulated as L independent sub-problems. The i-th codebook can therefore be learnt by the application of KMeans to SᵢX = {Sᵢu | u ∈ X}.
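The per-subspace decomposition makes the training and encoding routines short. The sketch below is illustrative (the names are hypothetical) and reuses the toy `lloyd` routine from Section 9.1.1; any KMeans implementation can stand in for it.

```python
import numpy as np

def train_pq(X, L, C):
    # One independent KMeans codebook per subspace; assumes d is divisible by L.
    d_sub = X.shape[1] // L
    codebooks = []
    for i in range(L):
        chunk = X[:, i * d_sub : (i + 1) * d_sub]       # rows of S_i X
        centroids, _ = lloyd(chunk, C)
        codebooks.append(centroids)                     # shape (C, d_sub)
    return codebooks

def pq_encode(u, codebooks):
    # The PQ code of u: one codeword index per subspace, L * log2(C) bits in total.
    d_sub = codebooks[0].shape[1]
    return [
        int(((cb - u[i * d_sub : (i + 1) * d_sub]) ** 2).sum(axis=1).argmin())
        for i, cb in enumerate(codebooks)
    ]
```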
# 9.2.1 Distance Computation with PQ
In vector quantization, computing the distance of a vector u to a query q was fairly trivial. All we had to do was to precompute a table that maps i ∈ [C] to ∥q − µi∥₂, then look up the entry that corresponds to ζ(u). The fact that we were able to precompute C distances once per query, then simply look up the right entry from the table for a vector u, helped us save a great deal of computation. Can we devise a similar algorithm given a PQ code?

The answer is yes. Indeed, that is why PQ has proven to be an efficient algorithm for distance computation. As in vector quantization, it first computes L distance tables, but the i-th table maps j ∈ [C] to ∥Sᵢq − µᵢ,ⱼ∥₂² (note the squared L2 distance). Using these tables, we can estimate the distance between q and any vector u as follows:
$$ \lVert q - u \rVert_2^2 \approx \lVert q - \tilde{u} \rVert_2^2 = \Big\lVert q - \bigoplus_{i=1}^{L} \mu_{i,\zeta_i(u)} \Big\rVert_2^2 = \sum_{i=1}^{L} \big\lVert S_i q - \mu_{i,\zeta_i(u)} \big\rVert_2^2. $$

Observe that we have already computed the summands and recorded them in the distance tables. As a result, approximating the distance between u and q amounts to L table look-ups. The overall amount of computation needed to approximate distances between q and m vectors in X is then O(LCd∘ + mL).
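A minimal numpy sketch of this table-based (asymmetric) distance computation follows; it is illustrative, with hypothetical helper names, and assumes the `codebooks` produced by the PQ training sketch above.

```python
import numpy as np

def pq_tables(q, codebooks):
    # The i-th table maps j -> ||S_i q - mu_{i,j}||^2; O(L * C * d_sub) per query.
    d_sub = codebooks[0].shape[1]
    return [
        ((cb - q[i * d_sub : (i + 1) * d_sub]) ** 2).sum(axis=1)
        for i, cb in enumerate(codebooks)
    ]

def pq_distance(codes, tables):
    # Approximate ||q - u||^2 with L table look-ups.
    return sum(float(table[c]) for table, c in zip(tables, codes))
```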
We must remark on the newly-introduced parameter d∘. In the context of vector quantization, even though the impact of C on the quantization error is not known theoretically, there is nonetheless a clear interpretation: a larger C leads to better quantization. In PQ, the impact of d∘ or, equivalently, L on the quantization error is not as clear. As noted earlier, we can say something about d∘ at the extremes, but what we should expect from a value somewhere between 1 and d is largely an empirical question [Sun et al., 2023].
# 9.2.2 Optimized Product Quantization
In PQ, we allocate an equal number of bits (log₂ C) to each of the L orthogonal subspaces. This makes sense if our vectors have similar energy in every subspace. But when the dimensions in one subspace are highly correlated, and in another uncorrelated, our equal-bits-per-subspace allocation policy proves wasteful in the former and perhaps inadequate in the latter. How can we ensure a more balanced energy across subspaces?
Jégou et al. [2011] argue that applying a random rotation R ∈ ℝ^{d×d} (RRᵀ = I) to the data points prior to quantization is one way to reduce the correlation between dimensions. The matrix R, together with the Sᵢ's as defined above, determines how we decompose the vector space into its subspaces. By applying a rotation first, we no longer chunk up an input vector into sub-vectors that comprise consecutive dimensions.
Later, Ge et al. [2014] and Norouzi and Fleet [2013] extended this idea and suggested that the matrix R can be learnt jointly with the codebooks. This can be done through an iterative algorithm that alternates between two steps in each iteration. In the first step, we freeze R and learn a PQ codebook as before. In the second step, we freeze the codebook and update the matrix R by solving the following optimization problem:
$$ \min_{R} \; \sum_{u \in \mathcal{X}} \lVert Ru - \tilde{u} \rVert_2^2 \qquad \text{s.t.} \quad RR^T = I, $$
where ũ is the approximation of u according to the frozen PQ codebook. Because u and ũ are fixed in the above optimization problem, we can rewrite the objective as follows:
$$ \min_{R} \; \lVert RU - \tilde{U} \rVert_F^2 \qquad \text{s.t.} \quad RR^T = I, $$
where U is a d-by-m matrix whose columns are the vectors in X, Ũ is a matrix whose columns are the approximations of the corresponding columns of U, and ∥·∥_F is the Frobenius norm. This problem has a closed-form solution, as shown by Ge et al. [2014].
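A sketch of the alternating procedure is shown below. It is illustrative rather than a faithful reimplementation: `train_pq`, `pq_encode`, and `pq_decode` are the toy routines sketched earlier, and the R-update is written here as an orthogonal Procrustes step (an SVD of Ũ Uᵀ), which is one standard way to realize a closed-form solution of this kind.

```python
import numpy as np

def pq_decode(codes, codebooks):
    # Reconstruct u~ by concatenating the selected codewords.
    return np.concatenate([cb[c] for cb, c in zip(codebooks, codes)])

def opq(X, L, C, iters=10):
    d = X.shape[1]
    R = np.eye(d)
    for _ in range(iters):
        Xr = X @ R.T                                  # rows are R u
        codebooks = train_pq(Xr, L, C)                # step 1: freeze R, learn the PQ codebook
        X_hat = np.stack([pq_decode(pq_encode(x, codebooks), codebooks) for x in Xr])
        # Step 2: freeze the codebook and solve min_R ||R U - U~||_F with R R^T = I.
        W, _, Vt = np.linalg.svd(X_hat.T @ X)         # SVD of U~ U^T
        R = W @ Vt
    return R, codebooks
```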
# 9.2.3 Extensions
Since the study by Jégou et al. [2011], many variations of the idea have emerged in the literature. In the original publication, for example, Jégou et al. [2011] used PQ codes in conjunction with the clustering-based retrieval framework presented earlier in this chapter. In other words, a collection X is first clustered into C clusters ("coarse quantization"), and each cluster is subsequently represented using its own PQ codebook. In this way, when the routing function identifies a cluster to search, we can compute distances for data points within that cluster using their PQ codes. Later, Babenko and Lempitsky [2012] extended this two-level quantization further by introducing the "inverted multi-index" structure.
When combining PQ with clustering or coarse quantization, instead of producing PQ codebooks for the raw vectors within each cluster, one could learn codebooks for the residual vectors instead. That means, if the centroid of the i-th cluster is µi, then we may quantize (u − µi) for each vector u ∈ ζ⁻¹(i).
This was the idea first introduced by Jégou et al. [2011], then developed further in subsequent works [Kalantidis and Avrithis, 2014, Wu et al., 2017]. The PQ literature does not end there. In fact, so popular, effective, and efficient is PQ that it pops up in many different contexts and a variety of applications, and research into improving its accuracy and speed is still ongoing. For example, many works speed up distance computation with PQ codebooks by leveraging hardware capabilities [Johnson et al., 2021, André et al., 2021, André et al., 2015]; others extend the algorithm to streaming (online) collections [Xu et al., 2018]; and yet other studies investigate alternative PQ codebook-learning protocols [Liu et al., 2020, Yu et al., 2018, Chen et al., 2020, Jang and Cho, 2021, Klein and Wolf, 2019, Lu et al., 2023]. This list is certainly not exhaustive and is still growing.
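The residual idea fits in a few lines on top of the earlier sketches. The following is illustrative only: it learns a single shared codebook for all residuals (rather than one per cluster, as in the original formulation) and reuses the hypothetical `train_pq` and `pq_encode` helpers from Section 9.2.

```python
import numpy as np

def encode_with_residuals(X, coarse_centroids, L, C):
    # Assign each point to its nearest coarse centroid, then PQ-encode (u - mu_i).
    assign = ((X[:, None, :] - coarse_centroids[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)
    residuals = X - coarse_centroids[assign]
    codebooks = train_pq(residuals, L, C)       # one shared residual codebook here
    codes = [pq_encode(r, codebooks) for r in residuals]
    return assign, codes, codebooks
```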
# 9.3 Additive Quantization
PQ remains the dominant quantization method for top-k retrieval due to its overall simplicity and the efficiency of its codebook learning protocol. There are, however, numerous generalizations of the framework [Babenko and Lempitsky, 2014, Chen et al., 2010, Niu et al., 2023, Liu et al., 2015, Ozan et al., 2016, Krishnan and Liberty, 2021]. Typically, these generalized forms improve the approximation error but require more involved codebook learning algorithms and vector encoding protocols. In this section, we review one key algorithm, known as Additive Quantization (AQ) [Babenko and Lempitsky, 2014], that is the backbone of all other methods.
Like PQ, AQ learns L codebooks, where each codebook consists of C codewords. Unlike PQ, however, each codeword is a vector in ℝᵈ rather than ℝ^{d∘}. Furthermore, a vector u is approximated as the sum, instead of the concatenation, of L codewords, one from each codebook: ũ = Σ_{i=1}^{L} µ_{i,ζᵢ(u)}, where ζᵢ : ℝᵈ → [C] is the quantizer associated with the i-th codebook.
Let us compare AQ with PQ at a high level and understand how AQ is different. We can still encode a data point using L log₂ C bits, as in PQ. However, the codebooks for AQ are L times larger than their PQ counterparts, simply because each codeword has d dimensions instead of d∘. On the other hand, AQ does not decompose the space into orthogonal subspaces and, as such, makes no assumptions about the independence between subspaces.
AQ is therefore a strictly more general quantization method than PQ. In fact, the class of additive quantizers contains the class of product quantizers: By restricting the i-th codebook in AQ to the set of codewords that are 0 everywhere outside of the i-th "chunk," we recover PQ. Empirical comparisons [Babenko and Lempitsky, 2014, Matsui et al., 2018] confirm that such a generalization is more effective in practice.
For this formulation to be complete, we have to specify how the codebooks are learnt, how we encode an arbitrary vector, and how we perform distance computation. We will cover these topics in reverse order in the following sections.
# 9.3.1 Distance Computation with AQ
Suppose for the moment that we have learnt AQ codebooks for a collection X and that we are able to encode an arbitrary vector into an AQ code (i.e., a vector of L codeword identifiers). In this section, we examine how we may compute the distance between a query point q and a data point u using its approximation ũ.
Observe the following fact:
$$ \lVert q - u \rVert_2^2 = \lVert q \rVert_2^2 - 2\langle q, u\rangle + \lVert u \rVert_2^2. $$
The first term is a constant that can be computed once per query and, at any rate, is inconsequential to the top-k retrieval problem. The last term, ∥u∥₂², can be stored for every vector and looked up during distance computation, as suggested by Babenko and Lempitsky [2014]. That means the encoding of a vector u ∈ X comprises two components: ũ and its (possibly scalar-quantized) squared norm. This brings the total space required to encode m vectors to O(LCd + m(1 + L log₂ C)).
The middle term can be approximated by ⟨q, ũ⟩, which can be expressed as follows:
$$ \langle q, u \rangle \approx \langle q, \tilde{u} \rangle = \sum_{i=1}^{L} \langle q, \mu_{i,\zeta_i(u)} \rangle. $$
As in PQ, the summands can be computed once for all codewords, and stored in a table. When approximating the inner product, we can do as before and look up the appropriate entries from these precomputed tables. The time complexity of this operation is therefore O(LCd + mL) for m data points, which is similar to PQ.
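The query-time procedure mirrors the PQ case. Here is an illustrative numpy sketch with hypothetical helper names, where `codebooks` is a list of L arrays of shape (C, d) and the squared norm of each data point is assumed to have been stored at indexing time.

```python
import numpy as np

def aq_tables(q, codebooks):
    # One table per codebook: maps j -> <q, mu_{i,j}>; O(L * C * d) per query.
    return [cb @ q for cb in codebooks]

def aq_distance(codes, stored_sq_norm, q_sq_norm, tables):
    # ||q - u||^2 is approximated by ||q||^2 - 2 <q, u~> + ||u||^2,
    # with <q, u~> obtained from L table look-ups.
    ip = sum(float(table[c]) for table, c in zip(tables, codes))
    return q_sq_norm - 2.0 * ip + stored_sq_norm
```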
# 9.3.2 AQ Encoding and Codebook Learning
While distance computation with AQ codes is fairly similar to the process involving PQ codes, the encoding of a data point is substantially different and relatively complex in AQ. That is because we can no longer simply assign a vector to its nearest codeword. Instead, we must find an arrangement of L codewords that together minimize the approximation error ∥u − ũ∥₂.
Let us expand the expression for the approximation error as follows:
$$ \lVert u - \tilde{u} \rVert_2^2 = \Big\lVert u - \sum_{i=1}^{L} \mu_{i,\zeta_i(u)} \Big\rVert_2^2 = \lVert u \rVert_2^2 + \sum_{i=1}^{L} \Big( -2\,\langle u, \mu_{i,\zeta_i(u)}\rangle + \lVert \mu_{i,\zeta_i(u)} \rVert_2^2 \Big) + 2 \sum_{1 \le i < j \le L} \langle \mu_{i,\zeta_i(u)}, \mu_{j,\zeta_j(u)} \rangle. $$
Notice that the first term is irrelevant to the objective function, so we may ignore it. We must therefore find ζᵢ's that minimize the remaining terms.
Babenko and Lempitsky [2014] use a generalized beam search to solve this optimization problem. The algorithm begins by selecting the L codewords closest to u from the union of all codebooks, ∪ᵢ {µᵢ,₁, ..., µᵢ,C}. For a chosen codeword µₖ,ⱼ, we compute the residual u − µₖ,ⱼ and find the L codewords closest to it from the remaining codebooks, ∪_{i≠k} {µᵢ,₁, ..., µᵢ,C}. After performing this search for all codewords chosen in the first round, we end up with a maximum of L² unique pairs of codewords. Note that each pair has codewords from two different codebooks.
Of the L² pairs, the algorithm picks the top L that minimize the approximation error. It then repeats this process for a total of L rounds, where in each round we compute the residuals given the L tuples of codewords, find, for each tuple, L codewords from the remaining codebooks, and ultimately identify the top L tuples among the L² candidates. At the end of the L-th round, the tuple with the minimal approximation error is the encoding for u.
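The sketch below conveys the shape of such an encoder. It is a deliberately simplified, hypothetical variant: in each round it extends every kept hypothesis with only the single best codeword from each unused codebook, rather than the L best codewords over all remaining codebooks as in the procedure of Babenko and Lempitsky [2014].

```python
import numpy as np

def aq_encode(u, codebooks, beam=None):
    L = len(codebooks)
    beam = beam or L
    # Each hypothesis: (current residual, partial assignment {codebook index: codeword index}).
    hypotheses = [(np.asarray(u, dtype=float), {})]
    for _ in range(L):
        candidates = []
        for residual, assignment in hypotheses:
            for i, cb in enumerate(codebooks):
                if i in assignment:
                    continue
                # Best codeword of the unused codebook i for this residual.
                j = int(((cb - residual) ** 2).sum(axis=1).argmin())
                candidates.append((residual - cb[j], {**assignment, i: j}))
        # Keep the `beam` partial solutions with the smallest residual norm.
        candidates.sort(key=lambda cand: float((cand[0] ** 2).sum()))
        hypotheses = candidates[:beam]
    _, best = hypotheses[0]
    return [best[i] for i in range(L)]
```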
Now that we have addressed the vector encoding part, it remains to describe the codebook learning procedure. Unsurprisingly, learning a codebook is not so dissimilar to the PQ codebook learning algorithm. It is an iterative procedure alternating between two steps to optimize the following objective:
$$ \min \; \sum_{u \in \mathcal{X}} \Big\lVert u - \sum_{i=1}^{L} \mu_{i,\zeta_i(u)} \Big\rVert_2^2. $$
One step of every iteration freezes the codewords and performs the assignments ζᵢ, which is the encoding problem we have already discussed above. The second step freezes the assignments and updates the codewords, which itself is a least-squares problem that can be solved relatively efficiently, considering that it decomposes over each dimension.
# 9.4 Quantization for Inner Product
The vector quantization literature has largely been focused on the Euclidean distance and the approximate nearest neighbor search problem. Those ideas typically port over to maximum cosine similarity search with little effort, but not to MIPS under general conditions. To understand why, suppose we wish to find a quantizer such that the inner product approximation error is minimized for a query distribution:

$$ \mathbb{E}_q\Big[ \sum_{u \in \mathcal{X}} \big( \langle q, u \rangle - \langle q, \tilde{u} \rangle \big)^2 \Big] = \sum_{u \in \mathcal{X}} \mathbb{E}_q\big[ \langle q, u - \tilde{u} \rangle^2 \big] = \sum_{u \in \mathcal{X}} \mathbb{E}_q\big[ (u - \tilde{u})^T q q^T (u - \tilde{u}) \big] = \sum_{u \in \mathcal{X}} (u - \tilde{u})^T\, \mathbb{E}\big[ q q^T \big]\, (u - \tilde{u}), \qquad (9.1) $$
where ũ is an approximation of u. If we assume that q is isotropic, so that its covariance matrix is the identity matrix scaled by some constant, then the objective above reduces to the reconstruction error. In that particular case, it makes sense for the quantization objective to be based on the reconstruction error, making the quantization methods we have studied thus far appropriate for MIPS too. But in the more general case, where the distribution of q is anisotropic, there is a gap between the true objective and the reconstruction error.
Guo et al. [2016] showed that, if we are able to obtain a small sample of queries to estimate E[qqᵀ], then we can modify the assignment step in Lloyd's iterative algorithm for KMeans in order to minimize the objective in Equation (9.1). That is, instead of assigning points to clusters by their Euclidean distance to the (frozen) centroids, we must instead use the Mahalanobis distance characterized by E[qqᵀ]. The resulting quantizer is arguably more suitable for inner product search than one based on the plain reconstruction error.
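A sketch of that modified assignment step is shown below, assuming a small matrix of sample queries is available; the function name and setup are illustrative, not from Guo et al. [2016].

```python
import numpy as np

def assign_mahalanobis(X, centroids, query_sample):
    # B estimates E[q q^T] from a sample of queries (one query per row).
    B = (query_sample.T @ query_sample) / len(query_sample)
    diffs = X[:, None, :] - centroids[None, :, :]             # (m, C, d)
    # Quadratic form (u - mu)^T B (u - mu) for every point/centroid pair.
    scores = np.einsum("mcd,de,mce->mc", diffs, B, diffs)
    return scores.argmin(axis=1)
```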
# 9.4.1 Score-aware Quantization
Later, Guo et al. [2020] argued that the objective in Equation (9.1) does not adequately capture the nuances of MIPS. Their argument rests on an observation and an intuition. The observation is that, in Equation (9.1), every single data point contributes equally to the optimization objective. Intuitively, however, data points are not equally likely to be the solution to MIPS. The error from data points that are more likely to maximize the inner product with queries should therefore be weighted more heavily than the error from other points.
Fig. 9.1: Decomposition of the residual error r(u, ũ) = u − ũ for u ∈ ℝ² into one component that is parallel to the data point, r∥(u, ũ), and another that is orthogonal to it, r⊥(u, ũ).
On the basis of that argument, Guo et al. [2020] introduce the following objective for inner product quantization:
$$ \sum_{u \in \mathcal{X}} \mathbb{E}_q\Big[ \omega\big( \langle q, u \rangle \big)\, \big( \langle q, u \rangle - \langle q, \tilde{u} \rangle \big)^2 \Big]. \qquad (9.2) $$
In the above, ω : ℝ → ℝ₊ is an arbitrary weight function that determines the importance of each data point to the optimization objective. Ideally, then, ω should be monotonically non-decreasing in its argument. One such weight function is ω(s) = 1_{s ≥ θ} for some threshold θ, implying that only data points whose expected inner product is at least θ contribute to the objective, while the rest are simply ignored. That is the weight function that Guo et al. [2020] choose in their work.
Something interesting emerges from Equation (9.2) with the choice of ω(s) = 1_{s ≥ θ}: It is more important for ũ to preserve the norm of u than it is to preserve its angle. We will show why that is shortly, but consider for the moment the reason this behavior is important for MIPS. Suppose there is a data point whose norm is much larger than the norms of the rest of the data points. Intuitively, such a data point has a good chance of maximizing the inner product with a query even if its angle with the query is relatively large. In other words, being a candidate solution to MIPS is less sensitive to angles and more sensitive to norms. Of course, as norms become more and more concentrated, angles take on a bigger role in determining the solution to MIPS. So, intuitively, an objective that penalizes the distortion of norms more than that of angles is more suitable for MIPS.
# 9.4.1.1 Parallel and Orthogonal Residuals
Fig. 9.2: The probability that the angle between a fixed data point u and a unit-norm query q drawn from a spherically-symmetric distribution is at most θ is equal to the surface area of the spherical cap with base radius a = sin θ. This fact is used in the proof of Theorem 9.1.

Let us present this phenomenon more formally and show why the statement above is true. Define the residual error as r(u, ũ) = u − ũ. The residual error can be decomposed into two components: one that is parallel to the data point, r∥(u, ũ), and another that is orthogonal to it, r⊥(u, ũ), as depicted in Figure 9.1. More concretely:
$$ r_\parallel(u, \tilde{u}) = \frac{\langle u - \tilde{u},\, u \rangle}{\lVert u \rVert^2}\, u, $$
and,
$$ r_\perp(u, \tilde{u}) = r(u, \tilde{u}) - r_\parallel(u, \tilde{u}). $$
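For later reference, the decomposition is straightforward to compute; the helper below is an illustrative sketch with a hypothetical name.

```python
import numpy as np

def residual_components(u, u_hat):
    r = u - u_hat
    r_par = (np.dot(r, u) / np.dot(u, u)) * u    # component parallel to u
    r_orth = r - r_par                           # component orthogonal to u
    return r_par, r_orth
```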
Guo et al. [2020] show first that, regardless of the choice of ω, the loss defined by ℓ(u, ũ, ω) in Equation (9.2) can be decomposed as stated in the following theorem.
Theorem 9.1 Given a data point u, its approximation ũ, and any weight function ω, the objective of Equation (9.2) can be decomposed as follows for a spherically-symmetric query distribution:
$$ \ell(u, \tilde{u}, \omega) \;\propto\; h_\parallel(\omega, \lVert u\rVert)\, \lVert r_\parallel(u,\tilde{u})\rVert^2 + h_\perp(\omega, \lVert u\rVert)\, \lVert r_\perp(u,\tilde{u})\rVert^2, $$
where,
$$ h_\parallel(\omega, t) = \int_0^{\pi} \omega(t\cos\theta)\, \big( \sin^{d-2}\theta - \sin^{d}\theta \big)\, d\theta, $$
and,
$$ h_\perp(\omega, t) = \frac{1}{d-1} \int_0^{\pi} \omega(t\cos\theta)\, \sin^{d}\theta\, d\theta. $$
Proof. Without loss of generality, we can assume that queries are unit vectors (i.e., ∥q∥ = 1). Let us write ℓ(u, ũ, ω) as follows:
$$ \ell(u, \tilde{u}, \omega) = \mathbb{E}_q\big[ \omega(\langle q, u\rangle)\, \langle q, u - \tilde{u}\rangle^2 \big] = \int_0^{\pi} \omega(\lVert u\rVert \cos\theta)\; \mathbb{E}_q\big[ \langle q, u - \tilde{u}\rangle^2 \,\big|\, \langle q, u\rangle = \lVert u\rVert \cos\theta \big]\; d\,\mathbb{P}\big[ \theta_{q,u} \le \theta \big], $$
where θ_{q,u} denotes the angle between q and u. Observe that P[θ_{q,u} ≤ θ] is the surface area of a spherical cap with base radius a = ∥q∥ sin θ = sin θ (see Figure 9.2). That quantity is equal to:
$$ \frac{\pi^{d/2}\, \lVert q\rVert^{d-1}}{\Gamma(d/2)}\; I\Big(a^2;\ \frac{d-1}{2},\ \frac{1}{2}\Big), $$
where Πis the Gamma function and I(z; ·, ·) is the incomplete Beta function. We may therefore write:
$$ \frac{d}{d\theta}\, \mathbb{P}\big[ \theta_{q,u} \le \theta \big] \;\propto\; (1 - a^2)^{-\frac{1}{2}}\, (a^2)^{\frac{d-3}{2}}\, \frac{d\,a^2}{d\theta} = (2\sin\theta\cos\theta)\, \frac{\sin^{d-3}\theta}{\cos\theta} = 2\,\sin^{d-2}\theta, $$
where in the first step we used the fact that dI(z; s, t) = (1 − z)^{t−1} z^{s−1} dz. Putting everything together, we can rewrite the loss as follows:
$$ \ell(u, \tilde{u}, \omega) \;\propto\; \int_0^{\pi} \omega(\lVert u\rVert \cos\theta)\; \mathbb{E}_q\big[ \langle q, u - \tilde{u}\rangle^2 \,\big|\, \langle q, u\rangle = \lVert u\rVert \cos\theta \big]\; \sin^{d-2}\theta\; d\theta. $$
We can complete the proof by applying the following lemma to the expectation over queries in the integral above.
Lemma 9.1

$$ \mathbb{E}\big[ \langle q, u - \tilde{u}\rangle^2 \,\big|\, \langle q, u\rangle = t \big] = \frac{t^2}{\lVert u\rVert^2}\, \lVert r_\parallel(u,\tilde{u})\rVert^2 + \frac{1 - t^2/\lVert u\rVert^2}{d-1}\, \lVert r_\perp(u,\tilde{u})\rVert^2. $$
Proof. We use the shorthand r∥ = r∥(u, ũ) and similarly r⊥ = r⊥(u, ũ). Decompose q = q∥ + q⊥, where q∥ = (⟨q, u⟩/∥u∥²) u and q⊥ = q − q∥. We can now write:
$$ \mathbb{E}\big[ \langle q, u - \tilde{u}\rangle^2 \,\big|\, \langle q, u\rangle = t \big] = \mathbb{E}\big[ \langle q_\parallel, r_\parallel\rangle^2 \,\big|\, \langle q, u\rangle = t \big] + \mathbb{E}\big[ \langle q_\perp, r_\perp\rangle^2 \,\big|\, \langle q, u\rangle = t \big]. $$
All other terms are equal to 0, either due to the orthogonality of the components or because of spherical symmetry. The first term is simply equal to ∥r∥∥² t²/∥u∥². By spherical symmetry, it is easy to show that the second term reduces to ((1 − t²/∥u∥²)/(d − 1)) ∥r⊥∥², which completes the proof.
Applying the lemma above to the integral, we obtain:
$$ \ell(u, \tilde{u}, \omega) \;\propto\; \int_0^{\pi} \omega(\lVert u\rVert \cos\theta)\, \Big( \cos^2\theta\, \lVert r_\parallel(u,\tilde{u})\rVert^2 + \frac{\sin^2\theta}{d-1}\, \lVert r_\perp(u,\tilde{u})\rVert^2 \Big)\, \sin^{d-2}\theta\; d\theta, $$
as desired.
When ω(s) = 1_{s ≥ θ} for some θ, Guo et al. [2020] show that h∥ outweighs h⊥, as the following theorem states. This implies that such an ω puts more emphasis on preserving the parallel residual error, as discussed earlier.

Theorem 9.2 For ω(s) = 1_{s ≥ θ} with θ ≥ 0, h∥(ω, t) ≥ h⊥(ω, t), with equality if and only if ω is constant over the interval [−t, t].
Proof. We can safely assume that h∥ and h⊥ are positive; they are 0 if and only if ω(s) = 0 over [−t, t]. We can thus express the ratio between them as follows:
$$ \frac{h_\parallel(\omega, t)}{h_\perp(\omega, t)} = (d-1) \left( \frac{\int_0^{\pi} \omega(t\cos\theta)\, \sin^{d-2}\theta\, d\theta}{\int_0^{\pi} \omega(t\cos\theta)\, \sin^{d}\theta\, d\theta} - 1 \right) = (d-1)\Big( \frac{I_{d-2}}{I_d} - 1 \Big), $$

where we denote by I_d = ∫₀^π ω(t cos θ) sinᵈθ dθ. Using integration by parts:
$$ I_d = \Big[ -\omega(t\cos\theta)\, \cos\theta\, \sin^{d-1}\theta \Big]_0^{\pi} + \int_0^{\pi} \cos\theta\, \Big[ \omega(t\cos\theta)\,(d-1)\,\sin^{d-2}\theta\, \cos\theta - \omega'(t\cos\theta)\, t\, \sin^{d}\theta \Big]\, d\theta $$
$$ = (d-1) \int_0^{\pi} \omega(t\cos\theta)\, \cos^2\theta\, \sin^{d-2}\theta\, d\theta - t \int_0^{\pi} \omega'(t\cos\theta)\, \cos\theta\, \sin^{d}\theta\, d\theta = (d-1)\, I_{d-2} - (d-1)\, I_d - t \int_0^{\pi} \omega'(t\cos\theta)\, \cos\theta\, \sin^{d}\theta\, d\theta. $$
Because ω(s) = 0 for s < 0, the last term reduces to an integral over [0, π/2]. The resulting integral is non-negative because sine and cosine are both non-negative over that interval. It is 0 if and only if ω′ = 0, or equivalently, when ω is constant. We have therefore shown that:
$$ I_d \le (d-1)\, I_{d-2} - (d-1)\, I_d \;\Longrightarrow\; \frac{I_{d-2}}{I_d} \ge \frac{d}{d-1} \;\Longrightarrow\; \frac{h_\parallel(\omega, t)}{h_\perp(\omega, t)} = (d-1)\Big( \frac{I_{d-2}}{I_d} - 1 \Big) \ge 1, $$
with equality when ω is constant, as desired.
# 9.4.1.2 Learning a Codebook
The results above formalize the intuition that the parallel residual plays a more important role in quantization for MIPS. If we were to plug the formalism above into the objective in Equation (9.2) and optimize it to learn a codebook, we would need to compute h∥ and h⊥ using Theorem 9.1. That would prove cumbersome indeed.
Instead, Guo et al. [2020] show that ω(s) = 1_{s ≥ θ} results in a more computationally-efficient optimization problem. Letting η(t) = h∥(ω, t)/h⊥(ω, t), they show that η/(d − 1) concentrates around (θ/t)²/(1 − (θ/t)²) as d becomes larger. So, in high dimensions, one can rewrite the objective function of Equation (9.2) as follows:
$$ \sum_{u \in \mathcal{X}} (d-1)\, \frac{(\theta/\lVert u\rVert)^2}{1 - (\theta/\lVert u\rVert)^2}\, \big\lVert r_\parallel(u, \tilde{u}) \big\rVert^2 + \big\lVert r_\perp(u, \tilde{u}) \big\rVert^2. $$
Guo et al. [2020] present an optimization procedure that is based on Lloyd's iterative algorithm for KMeans, and use it to learn a codebook by minimizing the objective above. Empirically, such a codebook outperforms one that is learnt by optimizing the reconstruction error.
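For illustration, the surrogate loss for a fixed set of reconstructions can be evaluated as in the sketch below. It is a hypothetical helper that follows the high-dimensional approximation written above (and reuses the `residual_components` helper from Section 9.4.1.1); it assumes θ < ∥u∥ for every data point, and it is not the training procedure of Guo et al. [2020] itself.

```python
import numpy as np

def anisotropic_loss(X, X_hat, theta):
    # Weighted sum of parallel and orthogonal residual errors.
    d = X.shape[1]
    total = 0.0
    for u, u_hat in zip(X, X_hat):
        r_par, r_orth = residual_components(u, u_hat)
        ratio = (theta / np.linalg.norm(u)) ** 2          # requires theta < ||u||
        eta = (d - 1) * ratio / (1.0 - ratio)
        total += eta * float(r_par @ r_par) + float(r_orth @ r_orth)
    return total
```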
# 9.4.1.3 Extensions
The score-aware quantization loss has, since its publication, been extended in two different ways. Zhang et al. [2022] adapted the objective function to an Additive Quantization form. Zhang et al. [2023] updated the weight function ω(·) so that the importance of a data point can be estimated based on a given set of training queries. Both extensions lead to substantial improvements on benchmark datasets.
# References
F. André, A.-M. Kermarrec, and N. Le Scouarnec. Cache locality is not enough: High-performance nearest neighbor search with product quantization fast scan. Proceedings of the VLDB Endowment, 9(4):288–299, 12 2015.
F. André, A.-M. Kermarrec, and N. Le Scouarnec. Quicker ADC: Unlocking the hidden potential of product quantization with SIMD. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1666–1677, 5 2021.
A. Babenko and V. Lempitsky. The inverted multi-index. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3069–3076, 2012.
A. Babenko and V. Lempitsky. Additive quantization for extreme vector compression. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 931–938, 2014.
T. Chen, L. Li, and Y. Sun. Differentiable product quantization for end-to-end embedding compression. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1617–1626, 7 2020.
Y. Chen, T. Guan, and C. Wang. Approximate nearest neighbor search by residual vector quantization. Sensors, 10(12):11259–11273, 2010.
T. Ge, K. He, Q. Ke, and J. Sun. Optimized product quantization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4):744–755, 2014.
R. Gray and D. Neuhoff. Quantization. IEEE Transactions on Information Theory, 44(6):2325–2383, 1998.
R. Guo, S. Kumar, K. Choromanski, and D. Simcha. Quantization based fast inner product search. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 482–490, Cadiz, Spain, 5 2016.
R. Guo, P. Sun, E. Lindgren, Q. Geng, D. Simcha, F. Chern, and S. Kumar. Accelerating large-scale inference with anisotropic vector quantization. In Proceedings of the 37th International Conference on Machine Learning, 2020.
Y. K. Jang and N. I. Cho. Self-supervised product quantization for deep unsupervised image retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12085–12094, October 2021.
H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.
J. Johnson, M. Douze, and H. Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2021.
Y. Kalantidis and Y. Avrithis. Locally optimized product quantization for approximate nearest neighbor search. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 2329–2336, 2014.
B. Klein and L. Wolf. End-to-end supervised product quantization for image search and retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6 2019.
A. Krishnan and E. Liberty. Projective clustering product quantization, 2021.
M. Liu, Y. Dai, Y. Bai, and L.-Y. Duan. Deep product quantization module for efficient image retrieval. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4382–4386, 2020.
S. Liu, H. Lu, and J. Shao. Improved residual vector quantization for high-dimensional approximate nearest neighbor search, 2015.
Z. Lu, D. Lian, J. Zhang, Z. Zhang, C. Feng, H. Wang, and E. Chen. Differentiable optimized product quantization and beyond. In Proceedings of the ACM Web Conference 2023, pages 3353–3363, 2023.
Y. Matsui, Y. Uchida, H. Jégou, and S. Satoh. A survey of product quantization. ITE Transactions on Media Technology and Applications, 6(1):2–10, 2018.
L. Niu, Z. Xu, L. Zhao, D. He, J. Ji, X. Yuan, and M. Xue. Residual vector product quantization for approximate nearest neighbor search. Expert Systems with Applications, 232(C), 12 2023.
M. Norouzi and D. J. Fleet. Cartesian k-means. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 3017–3024, 2013.
E. C. Ozan, S. Kiranyaz, and M. Gabbouj. Competitive quantization for approximate nearest neighbor search. IEEE Transactions on Knowledge and Data Engineering, 28(11):2884–2894, 2016.
P. Sun, R. Guo, and S. Kumar. Automating nearest neighbor search configuration with constrained optimization. In Proceedings of the 11th International Conference on Learning Representations, 2023.
X. Wu, R. Guo, A. T. Suresh, S. Kumar, D. N. Holtmann-Rice, D. Simcha, and F. Yu. Multiscale quantization for fast similarity search. In Advances in Neural Information Processing Systems, volume 30, 2017.
D. Xu, I. W. Tsang, and Y. Zhang. Online product quantization. IEEE Transactions on Knowledge and Data Engineering, 30(11):2185–2198, 2018.
T. Yu, J. Yuan, C. Fang, and H. Jin. Product quantization network for fast image retrieval. In Proceedings of the European Conference on Computer Vision, September 2018.
J. Zhang, Q. Liu, D. Lian, Z. Liu, L. Wu, and E. Chen. Anisotropic additive quantization for fast inner product search. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4):4354–4362, 6 2022.
J. Zhang, D. Lian, H. Zhang, B. Wang, and E. Chen. Query-aware quantization for maximum inner product search. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, 2023.
# Chapter 10 Sketching
Abstract Sketching is a probabilistic tool to summarize high-dimensional vectors into low-dimensional vectors, called sketches, while approximately preserving properties of interest. For example, we may sketch vectors in the Euclidean space such that their L2 norm is approximately preserved; or sketch points in an inner product space such that the inner product between any two points is maintained with high probability. This chapter reviews a few data-oblivious algorithms, cherry-picked from the vast literature on sketching, that are tailored to sparse vectors in an inner product space.
# 10.1 Intuition
We learnt about quantization as a form of vector compression in Chapter 9. There, vectors are decomposed into L subspaces, with each subspace mapped to C geometrically-cohesive buckets. By coding each subspace into only C values, we can encode an entire vector in L log C bits, often dramatically reducing the size of a vector collection, though at the cost of losing information in the process.
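For a concrete sense of scale, take d = 128-dimensional vectors split into L = 16 subspaces of 8 coordinates each, with C = 256 centroids per subspace: every vector is then encoded in 16 log₂ 256 = 128 bits, or 16 bytes, whereas storing the same vector as single-precision floats takes 128 × 4 = 512 bytes.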
The challenge, we also learnt, is that not enough can be said about the effects of L, C, and other parameters involved in the process of quantization on the reconstruction error. We can certainly intuit the asymptotic behavior of quantization, but that is neither interesting nor insightful. That leaves us no option other than settling on a configuration empirically.
Additionally, learning codebooks can become involved and cumbersome. It involves tuning parameters and running clustering algorithms, whose expected behavior is itself ill-understood when handling improper distance functions. The resulting codebooks too may become obsolete in the event of a distributional shift.
This chapter reviews a different class of compression techniques known as data-oblivious sketching. Let us break down this phrase and understand each part better.
The data-oblivious qualifier is rather self-explanatory: We make no assumptions about the input data, and in fact, do not even take advantage of the statistical properties of the data. We are, in other words, completely agnostic and oblivious to our input.
While obliviousness may put us at a disadvantage and lead to a larger magnitude of error, it creates two opportunities. First, we can often easily quantify the average qualities of the resulting compressed vectors. Second, by design, the compressed vectors are robust under any data drift. Once a vector collection has been compressed, in other words, we can safely assume that any guarantees we were promised will continue to hold.
Sketching, to continue our unpacking of the concept, is a probabilistic tool to reduce the dimensionality of a vector space while preserving certain properties of interest with high probability. In its simplest form, sketching is a function φ : R^d → R^{d◦}, where d◦ < d. If the "property of interest" is the Euclidean distance between any pair of points in a collection X, for instance, then φ(·) must satisfy the following for random points U and V:
\[
\mathbb{P}\Big[\,\big\lvert \lVert \phi(U) - \phi(V) \rVert_2 - \lVert U - V \rVert_2 \big\rvert \le \epsilon \,\Big] \ge 1 - \delta,
\]
for δ, ϵ ∈ (0, 1).
The output of φ(u), which we call the sketch of vector u, is a good substitute for u itself. If all we care about, as we do in top-k retrieval, is the distance between pairs of points, then we retain the ability to deduce that information with high probability just from the sketches of a collection of vectors. Considering that d◦ is smaller than d, we not only compress the collection through sketching, but, as with quantization, we are able to perform distance computations directly on the compressed vectors.
The literature on sketching offers numerous algorithms that are designed to approximate a wide array of norms, distances, and other properties of data. We refer the reader to the excellent monograph by Woodruff [2014] for a tour of this rich area of research. But to give the reader a better understanding of the connection between sketching and top-k retrieval, we use the remainder of this chapter to delve into three algorithms. To make things more interesting, we specifically review these algorithms in the context of inner product for sparse vectors.
The first is the quintessential linear algorithm due to Johnson and Lindenstrauss [1984]. It is linear in the sense that φ is simply a linear transformation, so that φ(u) = Φu for some (random) matrix Φ ∈ R^{d◦×d}. We will learn how to construct the required matrix and discuss what guarantees it has to offer. We then move to two sketching algorithms [Bruch et al., 2023] and [Daliri et al., 2023] whose output space is not Euclidean. Instead, the sketch of a vector is a data structure, equipped with a distance function that approximates the inner product between vectors in the original space.
# 10.2 Linear Sketching with the JL Transform
Let us begin by repeating the well-known result due to Johnson and Lindenstrauss [1984], which we refer to as the JL Lemma:
Lemma 10.1 For ϵ ∈ (0, 1) and any set X of m points in R^d, and an integer d◦ = Ω(ϵ⁻² ln m), there exists a Lipschitz mapping φ : R^d → R^{d◦} such that
\[
(1 - \epsilon)\, \lVert u - v \rVert_2^2 \;\le\; \lVert \phi(u) - \phi(v) \rVert_2^2 \;\le\; (1 + \epsilon)\, \lVert u - v \rVert_2^2,
\]
for all u, v ∈ X.
This result has been studied extensively and further developed since its introduction. Using simple proofs, for example, it can be shown that the mapping φ may be a linear transformation by a d◦ × d random matrix Φ drawn from a particular class of distributions. Such a matrix Φ is said to form a JL transform.
Definition 10.1 A random matrix Φ ∈ R^{d◦×d} forms a Johnson-Lindenstrauss transform with parameters (ϵ, δ, m), if with probability at least 1 − δ, for any m-element subset X ⊂ R^d, for all u, v ∈ X it holds that |⟨Φu, Φv⟩ − ⟨u, v⟩| ≤ ϵ‖u‖₂‖v‖₂.
There are many constructions of Φ that form a JL transform. It is trivial to show that when the entries of Φ are independently drawn from N(0, 1/d◦), then Φ is a JL transform with parameters (ϵ, δ, m) if d◦ = Ω(ϵ⁻² ln(m/δ)). In yet another construction, Φ = R/√d◦, where R ∈ {±1}^{d◦×d} is a matrix whose entries are independent Rademacher random variables.
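As a quick illustration, the snippet below (a minimal numpy sketch, not part of the original text) builds the Rademacher construction Φ = R/√d◦ and compares an inner product before and after sketching.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_sketch = 1024, 128

# Rademacher matrix: entries are +1 or -1 with equal probability.
R = rng.choice([-1.0, 1.0], size=(d_sketch, d))
phi = lambda x: (R @ x) / np.sqrt(d_sketch)   # the JL transform Phi = R / sqrt(d_sketch)

u, v = rng.normal(size=d), rng.normal(size=d)
print(np.dot(u, v), np.dot(phi(u), phi(v)))   # the two values should be close
```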
We take the latter as an example due to its simplicity and analyze its properties. As before, we refer the reader to [Woodruff, 2014] for a far more detailed discussion of other (more efficient) constructions of the JL transform.
# 10.2.1 Theoretical Analysis
We are interested in analyzing the transformation above in the context of inner product. Specifically, we wish to understand what we should expect if, instead of computing the inner product between two vectors u and v in Rd, we perform the operation â¨Ru, Rvâ© in the transformed space in Rd⦠. Is the outcome an unbiased estimate of the true inner product? How far off may this estimate be? The following result is a first step to answering these questions for two fixed vectors.
Theorem 10.1 Fix two vectors u and v ∈ R^d. Define Z_Sketch = ⟨φ(u), φ(v)⟩ as the random variable representing the inner product of sketches of size d◦, prepared using the projection φ(u) = Ru, with R ∈ {±1/√d◦}^{d◦×d} being a random Rademacher matrix. Z_Sketch is an unbiased estimator of ⟨u, v⟩. Its distribution tends to a Gaussian with variance:
\[
\frac{1}{d_\circ} \Big( \lVert u \rVert_2^2 \lVert v \rVert_2^2 + \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2 \Big).
\]
Proof. Consider the random variable Z = (Σⱼ Rⱼuⱼ)(Σₖ Rₖvₖ), where the Rⱼ's are Rademacher random variables. It is clear that Z/d◦ is the product of the i-th coordinates of the two sketches (for any i): φ(u)ᵢφ(v)ᵢ = Z/d◦.
We can expand the expected value of Z as follows:
\[
\mathbb{E}[Z] = \mathbb{E}\Big[\big(\sum_j R_j u_j\big)\big(\sum_k R_k v_k\big)\Big]
= \mathbb{E}\big[\sum_j R_j^2 u_j v_j\big] + \mathbb{E}\big[\sum_{j \ne k} R_j R_k u_j v_k\big]
= \sum_j u_j v_j \underbrace{\mathbb{E}[R_j^2]}_{1} + \sum_{j \ne k} u_j v_k \underbrace{\mathbb{E}[R_j R_k]}_{0}
= \langle u, v \rangle.
\]
The variance of Z can be expressed as follows:
\[
\mathrm{Var}[Z] = \mathbb{E}[Z^2] - \mathbb{E}[Z]^2 = \mathbb{E}\Big[\big(\sum_j R_j u_j\big)^2 \big(\sum_k R_k v_k\big)^2\Big] - \langle u, v \rangle^2.
\]
We have the following:
\[
\mathbb{E}\Big[\big(\sum_j R_j u_j\big)^2 \big(\sum_k R_k v_k\big)^2\Big]
= \mathbb{E}\Big[\Big(\sum_i u_i^2 + \sum_{i \ne j} R_i R_j u_i u_j\Big)\Big(\sum_k v_k^2 + \sum_{k \ne l} R_k R_l v_k v_l\Big)\Big]
\]
\[
= \lVert u \rVert_2^2 \lVert v \rVert_2^2
+ \underbrace{\mathbb{E}\Big[\sum_i u_i^2 \sum_{k \ne l} R_k R_l v_k v_l\Big]}_{0}
+ \underbrace{\mathbb{E}\Big[\sum_k v_k^2 \sum_{i \ne j} R_i R_j u_i u_j\Big]}_{0}
+ \mathbb{E}\Big[\sum_{i \ne j} R_i R_j u_i u_j \sum_{k \ne l} R_k R_l v_k v_l\Big]. \tag{10.1}
\]
The last term can be decomposed as follows:
\[
\mathbb{E}\Big[\sum_{i \ne j \ne k \ne l} R_i R_j R_k R_l\, u_i u_j v_k v_l\Big]
+ \mathbb{E}\Big[\sum_{\substack{i = k,\, j \ne l \ \text{or} \ i \ne k,\, j = l}} R_i^2 R_j R_l\, u_i u_j v_k v_l\Big]
+ \mathbb{E}\Big[\sum_{\substack{i \ne j,\ i = k,\, j = l \ \text{or} \ i \ne j,\ i = l,\, j = k}} R_i^2 R_j^2\, u_i u_j v_i v_j\Big].
\]
The first two terms are 0 and the last term can be rewritten as follows:
\[
2\, \mathbb{E}\Big[\sum_i u_i v_i \Big(\sum_j u_j v_j - u_i v_i\Big)\Big] = 2 \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2. \tag{10.2}
\]
We now substitute the last term in Equation (10.1) with Equation (10.2) to obtain:
\[
\mathrm{Var}[Z] = \lVert u \rVert_2^2 \lVert v \rVert_2^2 + \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2.
\]
Finally, Z_Sketch = Σᵢ φ(u)ᵢφ(v)ᵢ is the sum of independent, identically distributed random variables. Furthermore, for bounded vectors u and v, the variance is finite. By the application of the Central Limit Theorem, we can deduce that the distribution of Z_Sketch tends to a Gaussian distribution with the stated expected value. Noting that Var[Z_Sketch] = Var[Z]/d◦ completes the proof. ⊓⊔
Theorem 10.1 gives a clear model of the inner product error when two fixed vectors are transformed using our particular choice of the JL transform. We learnt that the inner product of sketches is an unbiased estimator of the inner product between vectors, and have shown that the error follows a Gaussian distribution.
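A small simulation (an illustrative sketch, not from the book) can be used to check this empirically: over many independent draws of R, the sample mean of Z_Sketch should approach ⟨u, v⟩ and the sample variance should approach the expression of Theorem 10.1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_sketch, trials = 64, 32, 10000

u, v = rng.normal(size=d), rng.normal(size=d)
estimates = np.empty(trials)
for t in range(trials):
    R = rng.choice([-1.0, 1.0], size=(d_sketch, d)) / np.sqrt(d_sketch)
    estimates[t] = np.dot(R @ u, R @ v)

predicted_var = (np.dot(u, u) * np.dot(v, v) + np.dot(u, v) ** 2
                 - 2.0 * np.sum(u ** 2 * v ** 2)) / d_sketch
print(np.dot(u, v), estimates.mean())   # should be close
print(predicted_var, estimates.var())   # should be close
```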
Let us now position this result in the context of top-k retrieval where the query point is fixed, but the data points are random. To make the analysis more interesting, let us consider sparse vectors, where each coordinate may be 0 with a non-zero probability.
Theorem 10.2 Fix a query vector q ∈ R^d and let X be a random vector drawn according to the following probabilistic model. Coordinate i, Xᵢ, is non-zero with probability pᵢ > 0 and, if it is non-zero, draws its value from a distribution with mean µ and variance σ² < ∞. Then, Z_Sketch = ⟨φ(q), φ(X)⟩, with φ(u) = Ru and R ∈ {±1/√d◦}^{d◦×d}, has expected value µ Σᵢ pᵢqᵢ and variance:
\[
\frac{1}{d_\circ} \Big( (\mu^2 + \sigma^2)\big(\lVert q \rVert_2^2 \sum_i p_i - \sum_i p_i q_i^2\big) + \mu^2 \Big(\big(\sum_i p_i q_i\big)^2 - \sum_i p_i^2 q_i^2\Big) \Big).
\]
Proof. It is easy to see that:
\[
\mathbb{E}[Z_{\text{Sketch}}] = \sum_i q_i\, \mathbb{E}[X_i] = \mu \sum_i p_i q_i.
\]
As for the variance, we start from Theorem 10.1 and arrive at the following expression:
\[
\frac{1}{d_\circ} \Big( \lVert q \rVert_2^2\, \mathbb{E}\big[\lVert X \rVert_2^2\big] + \mathbb{E}\big[\langle q, X \rangle^2\big] - 2 \sum_i q_i^2\, \mathbb{E}\big[X_i^2\big] \Big), \tag{10.3}
\]
where the expectation is with respect to X. Let us consider the terms inside the parentheses one by one. The first term becomes:
\[
\lVert q \rVert_2^2\, \mathbb{E}\big[\lVert X \rVert_2^2\big] = \lVert q \rVert_2^2 \sum_i \mathbb{E}[X_i^2] = \lVert q \rVert_2^2\, (\mu^2 + \sigma^2) \sum_i p_i.
\]
The second term reduces to:
2401.09350 | 367 | Putting all these terms back into Equation (10.3) yields the desired ex- ââ
Let us consider a special case to better grasp the implications of Theorem 10.2. Suppose pᵢ = ψ/d for some constant ψ for all dimensions i. Further assume, without loss of generality, that the (fixed) query vector has unit norm: ‖q‖₂ = 1. We can observe that the variance of Z_Sketch decomposes into a term that is (µ² + σ²)(1 − 1/d)ψ/d◦, and a second term that is a function of 1/d². The mean, on the other hand, is a linear function of the non-zero coordinates in the query: (Σᵢ qᵢ)ψ/d. As d grows, the mean of Z_Sketch tends to 0 at a rate proportional to the sparsity rate (ψ/d), while its variance tends to (µ² + σ²)ψ/d◦.
2401.09350 | 368 | The above suggests that the ability of Ï(·) to preserve the inner product of a query point with a randomly drawn data point deteriorates as a function of the number of non-zero coordinates. For example, when the number of non-zero coordinates becomes larger, â¨Ï(q), Ï(X)â© for a fixed query q and a random point X becomes less reliable because the variance of the approxi- mation increases.
# 10.3 Asymmetric Sketching
Our second sketching algorithm is due to Bruch et al. [2023]. It is unusual in several ways. First, it is designed specifically for retrieval. That is, the objective of the sketching technique is not to preserve the inner product between points in a collection; in fact, as we will learn shortly, the sketch is not even an unbiased estimator. Instead, it is assumed that the setup is retrieval, where we receive a query and wish to rank data points in response. That brings us to its second unusual property: asymmetry. That means, only the data points are sketched while queries remain in the original space. With the help of an asymmetric distance function, however, we can easily compute an upper-bound on the query-data point inner product, using the raw query point and the sketch of a data point. | 2401.09350#368 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
2401.09350 | 369 | Finally, in its original construction as presented in [Bruch et al., 2023], the sketch was tailored specifically to sparse vectors. As we will show, however, it is trivial to modify the algorithm and adapt it to dense vectors.
In the rest of this section, we will first describe the sketching algorithm for sparse vectors, as well as its extension to dense vectors. We then describe how the distance between a query point in the original space and the sketch of any
# Algorithm 5: Sketching of sparse vectors
Input: Sparse vector u ∈ R^d.
Requirements: h independent random mappings π_o : [d] → [d◦/2].
Result: Sketch of u, {nz(u); u̲; ū}, consisting of the index of non-zero coordinates of u, the lower-bound sketch, and the upper-bound sketch.
1: Let ū, u̲ ∈ R^{d◦/2} be zero vectors
2: for all k ∈ [d◦/2] do
3:    I ← {i ∈ nz(u) | ∃ o s.t. π_o(i) = k}
4:    ū_k ← max_{i∈I} u_i
5:    u̲_k ← min_{i∈I} u_i
6: end for
7: return {nz(u), u̲, ū}
2401.09350 | 370 | data point can be computed asymmetrically. Lastly, we review an analysis of the sketching algorithm.
# 10.3.1 The Sketching Algorithm
Algorithm 5 shows the logic behind the sketching of sparse vectors. It is assumed throughout that the sketch size, d◦, is even, so that d◦/2 is an integer. The algorithm also makes use of h independent random mappings π_o : [d] → [d◦/2], where each π_o(·) projects coordinates in the original space to an integer in the set [d◦/2] uniformly randomly.
Intuitively, the sketch of u ∈ R^d is a data structure comprising the index of its set of non-zero coordinates (i.e., nz(u)), along with an upper-bound sketch (ū ∈ R^{d◦/2}) and a lower-bound sketch (u̲ ∈ R^{d◦/2}) on the non-zero values of u. More precisely, the k-th coordinate of ū (u̲) records the largest (smallest) value from the set of all non-zero coordinates in u that map into k according to at least one π_o(·).
2401.09350 | 371 | This sketching algorithm offers a great deal of flexibility. When data vec- tors are non-negative, we may drop the lower-bounds from the sketch, so that the sketch of u consists only of {nz (u), u}. When vectors are dense, the sketch clearly does not need to store the set of non-zero co- ordinates, so that the sketch of u becomes {u, u}. Finally, when vectors are dense and non-negative, the sketch of u simplifies to u.
# Algorithm 6: Asymmetric distance computation for sparse vectors
Input: Sparse query vector q ∈ R^d; sketch of data point u: {nz(u), u̲, ū}.
Requirements: h independent random mappings π_o : [d] → [d◦/2].
Result: Upper-bound on ⟨q, u⟩.
1: s ← 0
2: for i ∈ nz(q) ∩ nz(u) do
3:    J ← {π_o(i) | o ∈ [h]}
4:    if q_i > 0 then
5:       s ← s + q_i · min_{j∈J} ū_j
6:    else
7:       s ← s + q_i · max_{j∈J} u̲_j
8:    end if
9: end for
10: return s
# 10.3.2 Inner Product Approximation
2401.09350 | 372 | # 10.3.2 Inner Product Approximation
Suppose that we are given a query point q ∈ R^d and wish to obtain an estimate of the inner product ⟨q, u⟩ for some data vector u. We must do so using only the sketch of u as produced by Algorithm 5. Because the query point is not sketched and, instead, remains in the original d-dimensional space, while u is only known in its sketched form, we say this computation is asymmetric. This is not unlike the distance computation between a query point and a quantized data point, as seen in Chapter 9.
This asymmetric procedure is described in Algorithm 6. The algorithm iterates over the intersection of the non-zero coordinates of the query vector and the non-zero coordinates of the data point (which is included in the sketch). It goes without saying that, if the vectors are dense, we may simply iterate over all coordinates. When visiting the i-th coordinate, we first form the set of coordinates that i maps to according to the hash functions π_o's; that is the set J in the algorithm.
2401.09350 | 373 | The next step then depends on the sign of the query at that coordinate. When qi is positive, we find the least upper-bound on the value of ui from its upper-bound sketch. That can be determined by looking at uj for all j â J , and taking the minimum value among those sketch coordinates. When qi < 0, on the other hand, we find the greatest lower- bound instead. In this way, it is always guaranteed that the partial inner product is an upper-bound on the actual partial inner product, qiui, as stated in the next theorem.
Theorem 10.3 The quantity returned by Algorithm 6 is an upper-bound on the inner product of query and data vectors.
# 10.3.3 Theoretical Analysis
Theorem 10.3 implies that Algorithm 6 always overestimates the inner product between query and data points. In other words, the inner product approximation error is non-negative. But what can be said about the probability that such an error occurs? How large is the overestimation error? We turn to these questions next.
2401.09350 | 374 | Before we do so, however, we must agree on a probabilistic model of the data. We follow [Bruch et al., 2023] and assume that a random sparse vector X is drawn from the following distribution. All coordinates of X are mutually independent. Its i-th coordinate is inactive (i.e., zero) with probability 1 â pi. Otherwise, it is active and its value is a random variable, Xi, drawn iid from some distribution with probability density function (PDF) Ï and cumulative distribution function (CDF) Φ.
# 10.3.3.1 Probability of Error
Let us focus on the approximation error of a single active coordinate. Concretely, suppose we have a random vector X whose i-th coordinate is active: i ∈ nz(X). We are interested in quantifying the likelihood that, if we estimated the value of Xᵢ from the sketch, the estimated value, X̂ᵢ, overshoots or undershoots the actual value.
2401.09350 | 375 | Formally, we wish to model P[ ËXi ̸= Xi], Note that, depending on the sign of the queryâs i-th coordinate ËXi may be estimated from the upper- bound sketch (X), resulting in overestimation, or the lower-bound sketch (X), resulting in underestimation. Because the two cases are symmetric, we state the main result for the former case: When ËXi is the least upper-bound on Xi, estimated from X:
\[
\hat{X}_i = \min_{j \in \{\pi_o(i) \,:\, o \in [h]\}} \overline{X}_j. \tag{10.4}
\]
Theorem 10.4 For large values of d◦, an active Xᵢ, and X̂ᵢ estimated using Equation (10.4),
\[
\mathbb{P}\big[\hat{X}_i > X_i\big] \approx \int \Big[1 - \exp\Big(-\frac{2h}{d_\circ}\big(1 - \Phi(\alpha)\big)\sum_{j \ne i} p_j\Big)\Big]^h \phi(\alpha)\, d\alpha,
\]
where Ï(·) and Φ(·) are the PDF and CDF of Xi.
Extending this result to the lower-bound sketch involves replacing 1 − Φ(α) with Φ(α). When the distribution defined by φ is symmetric, the probabilities of error too are symmetric for the upper-bound and lower-bound sketches.
Proof of Theorem 10.4. Recall that X̂ᵢ is estimated as follows:
2401.09350 | 376 | Proof of Theorem 10.4. Recall that ËXi is estimated as follows:
10.3 Asymmetric Sketching
ËXi = min jâ{Ïo(i) | oâ[h]} X j.
So we must look up X j for values of j produced by Ïoâs.
Suppose one such value is k (i.e., k = Ïo(i) for some o â [h]). The event that X k > Xi happens only when there exists another active coordinate Xj such that Xj > Xi and Ïo(j) = k for some Ïo.
To derive P[X k > Xi], it is easier to think in terms of complementary events: X k = Xi if every other active coordinate whose value is larger than Xi maps to a sketch coordinate except k. Clearly the probability that any arbitrary Xj maps to a sketch coordinate other than k is simply 1 â 2/dâ¦. Therefore, given a vector X, the probability that no active coordinate Xj larger than Xi maps to the k-th coordinate of the sketch, which we denote by âEvent A,â is:
2 P [Event A| x] =1-(1- a) es is activel x;>X;_ 0
# P | 2401.09350#376 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
2401.09350 | 377 | 2 P [Event A| x] =1-(1- a) es is activel x;>X;_ 0
# P
Because d⦠is large by assumption, we can approximate eâ1 â (1 â 2/dâ¦)dâ¦/2 and rewrite the expression above as follows:
2h P [Event A | X] + 1â exp ( - > 1x, is active x,>x;): ° 5#i
Finally, we marginalize the expression above over Xjâs for j ̸= i to remove the dependence on all but the i-th coordinate of X. To simplify the expression, however, we take the expectation over the first-order Taylor expansion of the right hand side around 0. This results in the following approximation:
P [Event A | X; =a] ~ 1 exp (- 1 â 9(a)) 3). ° j#i
For ËXi to be larger than Xi, event A must take place for all h sketch coordinates. That probability, by the independence of random mappings, is:
~ 2h I P[X; > X; |X, = a ~ {1 â exp (= 7 â ®(a)) Â¥7p;)] â ° i#i | 2401.09350#377 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
2401.09350 | 378 | ~ 2h I P[X; > X; |X, = a ~ {1 â exp (= 7 â ®(a)) Â¥7p;)] â ° i#i
In deriving the expression above, we conditioned the event on the value of Xi. Taking the marginal probability leads us to the following expression for the event that ËXi > Xi for any i, concluding the proof:
P[X;>X,] [[:-©0(- 2h ®(a)) p))] PCa) xi & | [1 â exp ( - ma - 2(a)) pj) | da)da. ii v
ââ
Theorem 10.4 offers insights into the behavior of the upper-bound sketch. The first observation is that the sketching mechanism presented here is more suitable for distributions where larger values occur with a smaller probability such as sub-Gaussian variables. In such cases, the larger the value is, the smaller its chance of being overestimated by the upper-bound sketch. Regardless of the underlying distribution, empiri- cally, the largest value in a vector is always estimated exactly. | 2401.09350#378 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
2401.09350 | 379 | The second insight is that there is a sweet spot for h given a particular value of dâ¦: using more random mappings helps lower the probability of error until the sketch starts to saturate, at which point the error rate increases. This particular property is similar to the behavior of a Bloom filter [Bloom, 1970].
# 10.3.3.2 Distribution of Error
We have modeled the probability that the sketch of a vector overestimates a value. In this section, we examine the shape of the distribution of error in the form of its CDF. Formally, assuming Xi is active and ËXi is estimated using Equation (10.4), we wish to find an expression for P[| ËXi â Xi| < ϵ] for any ϵ > 0.
Theorem 10.5 Suppose Xi is active and draws its value from a distribution with PDF and CDF Ï and Φ. Suppose further that ËXi is the least upper-bound on Xi, obtained using Equation (10.4). Then:
\[
\mathbb{P}\big[\hat{X}_i - X_i \le \epsilon\big] \approx 1 - \int \Big[1 - \exp\Big(-\frac{2h}{d_\circ}\big(1 - \Phi(\alpha + \epsilon)\big)\sum_{j \ne i} p_j\Big)\Big]^h \phi(\alpha)\, d\alpha.
\]
2401.09350 | 380 | h PIX; -Xi<qdwl -|[ [1 â exp ( - m1 â (a+ )) X»i)| o(a)da. ° j#i
Proof. We begin by quantifying the conditional probability P[ ËXi â Xi ⤠ϵ | Xi = α]. Conceptually, the event in question happens when all values that collide with Xi are less than or equal to Xi+ϵ. This event can be characterized as the complement of the event that all h sketch coordinates that contain Xi collide with values greater than Xi + ϵ. Using this complementary event, we can write the conditional probability as follows:
10.3 Asymmetric Sketching
PLX; âX, <e| X; =a) =1-[1-(1- F yno-ter Esai)" ° h xe1- 1 (- Ha -a0+)En)| : Ft
We complete the proof by computing the marginal distribution over the sup- ââ port.
Given the CDF of ËXi â Xi and the fact that ËXi â Xi ⥠0, it follows that its expected value conditioned on Xi being active is:
Lemma 10.2 Under the conditions of Theorem 10.5:
5 h Bx â xy ~ [ [[1-o0(-Za-s0+9%n)| (a) da de. j#ft | 2401.09350#380 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
2401.09350 | 381 | 5 h Bx â xy ~ [ [[1-o0(-Za-s0+9%n)| (a) da de. j#ft
# 10.3.3.3 Case Study: Gaussian Vectors
Let us make the analysis more concrete by applying the results to random Gaussian vectors. In other words, suppose all active Xiâs are drawn from a zero-mean, unit-variance Gaussian distribution. We can derive a closed-form expression for the overestimation probability as the following corollary shows.
Corollary 10.1 Suppose the probability that a coordinate is active, pi, is equal to p for all coordinates of the random vector X â Rd. When an ac- tive Xi, drawn from N (0, 1), is estimated using the upper-bound sketch with Equation (10.4), the overestimation probability is:
\[
\mathbb{P}\big[\hat{X}_i > X_i\big] \approx 1 + \sum_{k=1}^{h} \binom{h}{k} (-1)^k\, \frac{d_\circ}{2kh(d-1)p}\Big(1 - e^{-2kh(d-1)p/d_\circ}\Big).
\]
We begin by proving the special case where h = 1.
Lemma 10.3 Under the conditions of Corollary 10.1 with h = 1, the prob- ability that the upper-bound sketch overestimates the value of Xi is:
\[
\mathbb{P}\big[\hat{X}_i > X_i\big] \approx 1 - \frac{d_\circ}{2(d-1)p}\Big(1 - e^{-2(d-1)p/d_\circ}\Big).
\]
2401.09350 | 382 | ~ do _ 2(d=1)p P[X; > Xi] <1-sy-pt* a),
Proof. From Theorem 10.4 we have that:
P[X, > Xj] ~ | [1 â BOPP)" g(a) het | [1 _ o GMENEY FB (Q),
Given that the X_i's are drawn from a Gaussian distribution, and using the approximation above, we can rewrite the probability of error as:
P[\hat{X}_i > X_i] \approx \int_{-\infty}^{\infty} \Big[1 - e^{-\frac{2(d-1)p}{d_\Omega}(1 - \Phi(\alpha))}\Big] \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha.
We now break up the right-hand side into the following three sums, replacing 2(d − 1)p/d_Ω with β for brevity:
P[\hat{X}_i > X_i] \approx \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha   (10.5)
- \int_{-\infty}^{0} e^{-\beta(1 - \Phi(\alpha))} \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha   (10.6)
- \int_{0}^{\infty} e^{-\beta(1 - \Phi(\alpha))} \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha.   (10.7)
The sum in (10.5) is equal to the quantity 1. Let us turn to (10.7) first. We have that:
1 - \Phi(\alpha) = \int_{\alpha}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt, \quad \text{so that} \quad \frac{d}{d\alpha}\big(1 - \Phi(\alpha)\big) = -\frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}.
As a result, substituting u = 1 − Φ(α), we can write:
\int_{0}^{\infty} e^{-\beta(1 - \Phi(\alpha))} \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha = \int_{0}^{1/2} e^{-\beta u}\, du = \frac{1}{\beta}\Big(1 - e^{-\beta/2}\Big).
By similar reasoning, and noting that:
1 - \Phi(\alpha) = \frac{1}{2} + \int_{\alpha}^{0} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt \qquad \text{for } \alpha \le 0,
we arrive at:
\int_{-\infty}^{0} e^{-\beta(1 - \Phi(\alpha))} \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha = \int_{1/2}^{1} e^{-\beta u}\, du = \frac{1}{\beta} e^{-\beta}\Big(e^{\beta/2} - 1\Big).
Plugging the results above into Equations (10.5), (10.6), and (10.7) results in:
P[\hat{X}_i > X_i] \approx 1 - \frac{1}{\beta} e^{-\beta}\Big(e^{\beta/2} - 1\Big) - \frac{1}{\beta}\Big(1 - e^{-\beta/2}\Big) = 1 - \frac{1}{\beta}\Big(1 - e^{-\beta}\Big) = 1 - \frac{d_\Omega}{2(d-1)p}\Big(1 - e^{-\frac{2(d-1)p}{d_\Omega}}\Big),
which completes the proof.
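The closed form above can be sanity-checked numerically. The following minimal sketch (not part of the original analysis; the values of d, p, and d_Ω are hypothetical) integrates the left-hand side of the h = 1 approximation on a grid and compares it against the closed form.

```python
import math

def overestimation_probability(d, p, d_omega, grid=200_000, lo=-10.0, hi=10.0):
    """Numerically evaluate the integral of [1 - exp(-beta (1 - Phi(a)))] phi(a)."""
    beta = 2 * (d - 1) * p / d_omega
    step = (hi - lo) / grid
    total = 0.0
    for k in range(grid):
        a = lo + (k + 0.5) * step
        pdf = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)   # standard Gaussian pdf
        cdf = 0.5 * (1 + math.erf(a / math.sqrt(2)))          # standard Gaussian cdf
        total += (1 - math.exp(-beta * (1 - cdf))) * pdf * step
    closed_form = 1 - (1 / beta) * (1 - math.exp(-beta))
    return total, closed_form

# The two numbers should agree up to discretization error.
print(overestimation_probability(d=30_000, p=0.005, d_omega=1_024))
```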
Given the result above, the solution for the general case of h > 0 is straightforward to obtain.
Proof of Corollary 10.1. Using the binomial theorem, we have that:
P[\hat{X}_i > X_i] \approx \int \Big[1 - e^{-\frac{2h(d-1)p}{d_\Omega}(1 - F(\alpha))}\Big]^h f(\alpha)\, d\alpha = \sum_{k=0}^{h} \binom{h}{k}(-1)^k \int e^{-\frac{2kh(d-1)p}{d_\Omega}(1 - F(\alpha))} f(\alpha)\, d\alpha.
We rewrite the expression above for Gaussian variables to arrive at:
P[\hat{X}_i > X_i] \approx \sum_{k=0}^{h} \binom{h}{k}(-1)^k \int_{-\infty}^{\infty} e^{-\frac{2kh(d-1)p}{d_\Omega}(1 - \Phi(\alpha))} \frac{1}{\sqrt{2\pi}} e^{-\alpha^2/2}\, d\alpha.
Following the proof of the previous lemma, we can expand the right hand side as follows:
P[\hat{X}_i > X_i] \approx 1 + \sum_{k=1}^{h} \binom{h}{k}(-1)^k \frac{d_\Omega}{2kh(d-1)p}\Big(1 - e^{-\frac{2kh(d-1)p}{d_\Omega}}\Big),
where each term follows from the computation in the previous lemma with β replaced by 2kh(d − 1)p/d_Ω,
which completes the proof.
Let us now consider the CDF of the overestimation error.
Corollary 10.2 Under the conditions of Corollary 10.1, the CDF of the overestimation error for an active coordinate X_i ∼ N(0, σ) is:
P[\hat{X}_i - X_i \le \epsilon] \approx 1 - \Big[1 - \exp\Big(-\frac{2h(d-1)p}{d_\Omega}\big(1 - \Phi'(\epsilon)\big)\Big)\Big]^h,
where Φ′(·) is the CDF of a zero-mean Gaussian with standard deviation σ√2.
Proof. When the active values of a vector are drawn from a Gaussian distribution, the pairwise difference between any two coordinates has a Gaussian distribution with standard deviation σ√2. As such, we may estimate 1 − Φ(α + ϵ) by considering the probability that a pair of coordinates (one of which has value α) has a difference greater than ϵ: P[X_i − X_j > ϵ]. With that idea, we may thus write:
1 â Φ(α + ϵ) = 1 â Φâ²(ϵ).
The claim follows by using the above identity in Theorem 10.5.
Corollary 10.2 enables us to find a particular sketch configuration given a desired bound on the probability of error, as the following lemma shows.
Lemma 10.4 Under the conditions of Corollary 10.2, and given a choice of ϵ, δ ∈ (0, 1) and the number of random mappings h, we have X̂_i − X_i ≤ ϵ with probability at least 1 − δ if:
d_\Omega > -\frac{2h(d-1)p\big(1 - \Phi'(\epsilon)\big)}{\log\big(1 - \delta^{1/h}\big)}.
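As a rough illustration, the bound of Lemma 10.4 can be evaluated directly; the helper below is a minimal sketch (the parameter values are hypothetical, and σ denotes the standard deviation of active coordinates).

```python
import math

def minimum_sketch_size(d, p, h, epsilon, delta, sigma=1.0):
    """Smallest sketch size d_omega that satisfies the bound of Lemma 10.4."""
    # Phi'(epsilon): CDF at epsilon of a zero-mean Gaussian with standard deviation sigma * sqrt(2).
    phi_prime = 0.5 * (1 + math.erf(epsilon / (2 * sigma)))
    numerator = -2 * h * (d - 1) * p * (1 - phi_prime)
    denominator = math.log(1 - delta ** (1 / h))
    return math.ceil(numerator / denominator)

print(minimum_sketch_size(d=30_000, p=0.005, h=2, epsilon=0.5, delta=0.05))
```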
# 10.3.3.4 Error of Inner Product
We have thus far quantified the probability that a value estimated from the upper-bound sketch overestimates the original value of a randomly chosen coordinate. We also characterized the distribution of the overestimation error for a single coordinate and derived expressions for special distributions. In this section, we quantify the overestimation error when approximating the inner product between a fixed query point and a random data point using Algorithm 6.
Consider the expected value of X̂_i − X_i conditioned on X_i being active; that is a quantity we analyzed previously. Let µ_i = E[X̂_i − X_i; X_i is active]. Similarly denote by σ_i² its variance when X_i is active. Given that X_i is active with probability p_i and inactive with probability 1 − p_i, it is easy to show that E[X̂_i − X_i] = p_i µ_i (note we have removed the condition on X_i being active) and that its variance is Var[X̂_i − X_i] = p_i σ_i² + p_i(1 − p_i) µ_i².
With the above in mind, we state the following result.
Theorem 10.6 Suppose that q â Rd is a sparse vector. Suppose in a random sparse vector X â Rd, a coordinate Xi is active with probability pi and, when active, draws its value from some well-behaved distribution (i.e., with finite
expectation, variance, and third moment). If µ_i = E[X̂_i − X_i; X_i is active] and σ_i² = Var[X̂_i − X_i; X_i is active], then the random variable Z defined as follows:
Z \triangleq \frac{\langle q, \hat{X} - X \rangle - \sum_{i \in nz(q)} q_i p_i \mu_i}{\sqrt{\sum_{i \in nz(q)} q_i^2 \big(p_i \sigma_i^2 + p_i(1 - p_i)\mu_i^2\big)}}   (10.8)
approximately tends to a standard Gaussian distribution as |nz (q)| grows.
Proof. Let us expand the inner product between q and ËX â X as follows:
\langle q, \hat{X} - X \rangle = \sum_{i \in nz(q)} q_i (\hat{X}_i - X_i) = \sum_{i \in nz(q)} q_i Z_i, \quad \text{where } Z_i \triangleq \hat{X}_i - X_i.   (10.9)
The expected value of â¨q, ËX â Xâ© is:
E[\langle q, \hat{X} - X \rangle] = \sum_{i \in nz(q)} q_i E[Z_i] = \sum_{i \in nz(q)} q_i p_i \mu_i.
Its variance is:
Var[\langle q, \hat{X} - X \rangle] = \sum_{i \in nz(q)} q_i^2 Var[Z_i] = \sum_{i \in nz(q)} q_i^2 \big(p_i \sigma_i^2 + p_i(1 - p_i)\mu_i^2\big).
Because we assumed that the distribution of X_i is well-behaved, we can conclude that Var[Z_i] > 0 and that E[|Z_i|³] < ∞. If we operated on the assumption that the q_iZ_i's are independent (in reality, they are weakly dependent), albeit not identically distributed, we can appeal to the Berry-Esseen theorem to complete the proof.
# 10.3.4 Fixing the Sketch Size
It is often desirable for a sketching algorithm to produce a sketch with a con- stant size. That makes the size of a collection of sketches predictable, which is often required for resource allocation. Algorithm 5, however, produces a sketch whose size is variable. That is because the sketch contains the set of non-zero coordinates of the vector.
It is, however, straightforward to fix the sketch size. The key to that is the fact that Algorithm 6 uses nz (u) of a vector u only to ascertain if a queryâs non-zero coordinates are present in the vector u. In effect, all the sketch must provide is a mechanism to perform set membership tests. That is precisely what fixed-size signatures such as Bloom filters [Bloom, 1970] do, albeit probabilistically.
Algorithm 7: Sketching with threshold sampling
2401.09350 | 390 | 159
Input: Vector u ∈ R^d. Requirements: a random mapping π : [d] → [0, 1]. Result: Sketch of u, {I, V, ∥u∥₂²}, consisting of the indices and values of sampled coordinates in I and V, and the squared norm of the vector.
1: I, V ← ∅
2: for i ∈ nz(u) do
3:   θ ← d_Ω u_i² / ∥u∥₂²
4:   if π(i) ≤ θ then
5:     Append i to I and u_i to V
6:   end if
7: end for
8: return {I, V, ∥u∥₂²}
# 10.4 Sketching by Sampling
Our final sketching algorithm is designed specifically for inner product and is due to Daliri et al. [2023]. The guiding principle is simple: coordinates with larger values contribute more heavily to inner product than coordinates with smaller values. That is an obvious fact that is a direct result of the linearity of inner product: ⟨u, v⟩ = Σ_i u_i v_i. Daliri et al. [2023] use that insight as follows. When forming the sketch of vector u, they sample coordinates (without replacement) from u according to a distribution defined by the magnitude of each coordinate. Larger values are given a higher chance of being sampled, while smaller values are less likely to be selected. The sketch, in the end, is a data structure that is made up of the index of sampled coordinates, their values, and additional statistics.
The research question here concerns the sampling process: How must we sample coordinates such that any distance computed from the sketch is an unbiased estimate of the inner product itself? The answer to that question also depends, of course, on how we compute the distance from a pair of sketches. Considering the non-linearity of the sketch, distance computation can no longer be the inner product of sketches.
# 10.4.1 The Sketching Algorithm
Algorithm 7 presents the "threshold sampling" sketching technique by Daliri et al. [2023]. It is assumed throughout that the desired sketch size is d_Ω, and that the algorithm has access to a random hash function π that maps integers in [d] to the unit interval.
Algorithm 8: Distance computation for threshold sampling
Input: Sketches of vectors u and v: {I_u, V_u, ∥u∥₂²} and {I_v, V_v, ∥v∥₂²}. Result: An unbiased estimate of ⟨u, v⟩.
1: s ← 0
2: for i ∈ I_u ∩ I_v do
3:   s ← s + u_i v_i / min(1, d_Ω u_i²/∥u∥₂², d_Ω v_i²/∥v∥₂²)
4: end for
5: return s
The algorithm iterates over all non-zero coordinates of the input vector and makes a decision as to whether that coordinate should be added to the sketch. The decision is made based on the relative magnitude of the coordinate, as weighted by u_i²/∥u∥₂²: when u_i²/∥u∥₂² is large, coordinate i has a higher chance of being sampled, as desired.
Notice, however, that the target sketch size d_Ω is realized in expectation only. In other words, we may end up with more than d_Ω coordinates in the sketch, or we may have fewer entries. Daliri et al. [2023] propose a different variant of the algorithm that is guaranteed to give a fixed sketch size; we refer the reader to their work for details.
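For concreteness, the following is a minimal Python sketch of Algorithm 7 (an illustration, not the authors' implementation). It assumes the vector is given as a dictionary from coordinate index to value, and it stands in for the random mapping π with a deterministic hash so that the same mapping can be shared by every vector that will later be compared.

```python
import hashlib

def uniform_hash(i: int, seed: int = 0) -> float:
    # Stand-in for the random mapping pi : [d] -> [0, 1]; it must be shared by
    # all vectors whose sketches will be compared against one another.
    digest = hashlib.sha256(f"{seed}:{i}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def threshold_sampling_sketch(u: dict[int, float], d_omega: int, seed: int = 0) -> dict:
    squared_norm = sum(value * value for value in u.values())
    indices, values = [], []
    for i, u_i in u.items():
        theta = d_omega * u_i * u_i / squared_norm
        # Coordinate i survives with probability min(1, theta).
        if uniform_hash(i, seed) <= theta:
            indices.append(i)
            values.append(u_i)
    return {"indices": indices, "values": values, "squared_norm": squared_norm}
```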
# 10.4.2 Inner Product Approximation
When sketching a vector using a JL transform, we simply get a vector in the d_Ω-dimensional Euclidean space, where inner product is well-defined. So if φ(u) and φ(v) are sketches of two d-dimensional vectors u and v, we approximate ⟨u, v⟩ with ⟨φ(u), φ(v)⟩. It could not be more straightforward.
A sketch produced by Algorithm 7, however, is not as nice. Approximating ⟨u, v⟩ from their sketches requires a custom distance function defined for the sketch. That is precisely what Algorithm 8 outlines.
In the algorithm, it is understood that the u_i and v_i corresponding to i ∈ I_u ∩ I_v are present in V_u and V_v, respectively. These quantities, along with d_Ω and the norms of the vectors, are used to weight each partial inner product. The final quantity, as we will learn shortly, is an unbiased estimate of the inner product between u and v.
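A matching Python sketch of Algorithm 8 is shown below, continuing the illustrative snippet from the previous section; the two input sketches must have been produced with the same mapping π (here, the same seed).

```python
def estimate_inner_product(sketch_u: dict, sketch_v: dict, d_omega: int) -> float:
    u = dict(zip(sketch_u["indices"], sketch_u["values"]))
    v = dict(zip(sketch_v["indices"], sketch_v["values"]))
    estimate = 0.0
    for i in set(u) & set(v):
        # Weight each surviving term by the probability that i was sampled in both sketches.
        p_i = min(
            1.0,
            d_omega * u[i] * u[i] / sketch_u["squared_norm"],
            d_omega * v[i] * v[i] / sketch_v["squared_norm"],
        )
        estimate += u[i] * v[i] / p_i
    return estimate
```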
# 10.4.3 Theoretical Analysis
Theorem 10.7 Algorithm 7 produces sketches that consist of at most d⦠coordinates in expectation.
Proof. The number of sampled coordinates is |I|. That quantity can be ex- pressed as follows:
|I| = \sum_{i=1}^{d} \mathbb{1}_{i \in I}.
Taking expectation of both sides and using the linearity of expectation, we obtain the following:
E[|I|] = \sum_{i=1}^{d} E[\mathbb{1}_{i \in I}] = \sum_{i=1}^{d} \min\Big(1, d_\Omega \frac{u_i^2}{\|u\|_2^2}\Big) \le d_\Omega.
Theorem 10.8 Algorithm 8 yields an unbiased estimate of inner product.
Proof. From the proof of the previous theorem, we know that coordinate i of an arbitrary vector u is included in the sketch with probability equal to:
\min\Big(1, d_\Omega \frac{u_i^2}{\|u\|_2^2}\Big).
As such, the odds that i â Iu â© Iv is:
p_i = \min\Big(1, d_\Omega \frac{u_i^2}{\|u\|_2^2}, d_\Omega \frac{v_i^2}{\|v\|_2^2}\Big).
Algorithm 8 gives us a weighted sum of the coordinates that are present in Iu â© Iv. We can rewrite that sum using indicator functions as follows:
\sum_{i=1}^{d} \mathbb{1}_{i \in I_u \cap I_v} \frac{u_i v_i}{p_i}.
In expectation, then:
E\Big[\sum_{i=1}^{d} \mathbb{1}_{i \in I_u \cap I_v} \frac{u_i v_i}{p_i}\Big] = \sum_{i=1}^{d} p_i \frac{u_i v_i}{p_i} = \langle u, v \rangle,
as required.
Theorem 10.9 If S is the output of Algorithm 8 for sketches of vectors u and v, then:
Var[S] \le \frac{2}{d_\Omega} \max\Big(\|u_\cap\|_2^2 \|v\|_2^2,\; \|u\|_2^2 \|v_\cap\|_2^2\Big),
where u_∩ and v_∩ are the vectors u and v restricted to the set of non-zero coordinates common to both vectors (i.e., ∩ = {i | u_i ≠ 0 ∧ v_i ≠ 0}).
Proof. We use the same proof strategy as in the previous theorem. In partic- ular, we write:
Var[S] = Var\Big[\sum_{i \in \cap} \mathbb{1}_{i \in I_u \cap I_v} \frac{u_i v_i}{p_i}\Big] = \sum_{i \in \cap} Var\Big[\mathbb{1}_{i \in I_u \cap I_v} \frac{u_i v_i}{p_i}\Big] = \sum_{i \in \cap} \frac{u_i^2 v_i^2}{p_i^2}\, Var\big[\mathbb{1}_{i \in I_u \cap I_v}\big].
Turning to the term inside the sum, we obtain:
Var\big[\mathbb{1}_{i \in I_u \cap I_v}\big] = p_i - p_i^2,
which is 0 if p_i = 1 and less than p_i otherwise. Putting everything together, we complete the proof:
Var[S] \le \sum_{i \in \cap,\, p_i \ne 1} \frac{u_i^2 v_i^2}{p_i} = \|u\|_2^2 \|v\|_2^2 \sum_{i \in \cap,\, p_i \ne 1} \frac{(u_i^2/\|u\|_2^2)(v_i^2/\|v\|_2^2)}{p_i} \le \frac{\|u\|_2^2 \|v\|_2^2}{d_\Omega} \sum_{i \in \cap} \max\Big(\frac{u_i^2}{\|u\|_2^2}, \frac{v_i^2}{\|v\|_2^2}\Big) \le \frac{\|u\|_2^2 \|v\|_2^2}{d_\Omega} \Big(\frac{\|u_\cap\|_2^2}{\|u\|_2^2} + \frac{\|v_\cap\|_2^2}{\|v\|_2^2}\Big) = \frac{1}{d_\Omega}\Big(\|u_\cap\|_2^2 \|v\|_2^2 + \|u\|_2^2 \|v_\cap\|_2^2\Big) \le \frac{2}{d_\Omega} \max\Big(\|u_\cap\|_2^2 \|v\|_2^2,\; \|u\|_2^2 \|v_\cap\|_2^2\Big).
Theorem 10.9 tells us that, if we estimated â¨u, vâ© for two vectors u and v using Algorithm 8, then the variance of our estimate will be bounded by factors that depend on the non-zero coordinates that u and v have in common. Because nz (u) â© nz (v) has at most d entries, estimates of inner product based on Threshold Sampling should generally be more accurate than those obtained from JL sketches. This is particularly the case when u and v are sparse.
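As a rough empirical companion to Theorems 10.8 and 10.9, the Monte Carlo check below reuses the two illustrative snippets shown earlier in this chapter (threshold_sampling_sketch and estimate_inner_product); the vectors and parameters are hypothetical.

```python
import random

def monte_carlo_check(trials: int = 2_000, d_omega: int = 64) -> None:
    rng = random.Random(1)
    u = {i: rng.gauss(0, 1) for i in rng.sample(range(100), 50)}
    v = {i: rng.gauss(0, 1) for i in rng.sample(range(100), 50)}
    exact = sum(u[i] * v[i] for i in set(u) & set(v))
    estimates = []
    for seed in range(trials):
        sketch_u = threshold_sampling_sketch(u, d_omega, seed)
        sketch_v = threshold_sampling_sketch(v, d_omega, seed)
        estimates.append(estimate_inner_product(sketch_u, sketch_v, d_omega))
    mean = sum(estimates) / trials
    variance = sum((e - mean) ** 2 for e in estimates) / trials
    # The mean should track the exact inner product (unbiasedness), and the
    # empirical variance should respect the bound of Theorem 10.9.
    print(f"exact={exact:.4f}  mean estimate={mean:.4f}  empirical variance={variance:.4f}")

monte_carlo_check()
```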
# References
B. H. Bloom. Space/time trade-offs in hash coding with allowable errors. Commun. ACM, 13(7):422â426, jul 1970.
S. Bruch, F. M. Nardini, A. Ingber, and E. Liberty. An approximate algorithm for maximum inner product search over streaming sparse vectors. ACM Transactions on Information Systems, 42(2), nov 2023.
M. Daliri, J. Freire, C. Musco, A. Santos, and H. Zhang. Sampling methods for inner product sketching, 2023.
W. B. Johnson and J. Lindenstrauss. Extensions of lipschitz mappings into hilbert space. Contemporary Mathematics, 26:189â206, 1984.
D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foun- dations and Trends in Theoretical Computer Science, 10(1â2):1â157, Oct 2014. ISSN 1551-305X.
Part IV Appendices
# Appendix A Collections
Abstract This appendix gives a description of the vector collections used in experiments throughout this monograph. These collections demonstrate different operating points in a typical use-case. For example, some consist of dense vectors, others of sparse vectors; some have few dimensions and others are in much higher dimensions; some are relatively small while others contain a large number of points.
Table A.1 gives a description of the dense vector collections used through- out this monograph and summarizes their key statistics.
Table A.1: Dense collections used in this monograph along with select statis- tics.
Collection  Vector Count  Query Count  Dimensions
GloVe-25 [Pennington et al., 2014]  1.18M  10,000  25
GloVe-50  1.18M  10,000  50
GloVe-100  1.18M  10,000  100
GloVe-200  1.18M  10,000  200
Deep1b [Yandex and Lempitsky, 2016]  9.99M  10,000  96
MS Turing [Zhang et al., 2019]  10M  100,000  100
Sift [Lowe, 2004]  1M  10,000  128
Gist [Oliva and Torralba, 2001]  1M  1,000  960
2401.09350 | 400 | In addition to the vector collections above, we convert a few text collections into vectors using various embedding models. These collections are described in Table A.2. Please see [Bajaj et al., 2018] for a complete description of the MS MARCO v1 collection and [Thakur et al., 2021] for the others.
When transforming the text collections of Table A.2 into vectors, we use the following embedding models:
Table A.2: Text collections along with key statistics. The rightmost two columns report the average number of non-zero entries in data points and, in parentheses, queries for sparse vector representations of the collections.
Collection  Vector Count  Query Count  Splade  Efficient Splade
MS Marco Passage  8.8M  6,980  127 (49)  185 (5.9)
NQ  2.68M  3,452  153 (51)  212 (8)
Quora  523K  10,000  68 (65)  68 (8.9)
HotpotQA  5.23M  7,405  131 (59)  125 (13)
Fever  5.42M  6,666  145 (67)  140 (8.6)
DBPedia  4.63M  400  134 (49)  131 (5.9)
⢠AllMiniLM-l6-v2:1 Projects text documents into 384-dimensional dense vectors for retrieval with angular distance. | 2401.09350#400 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
• Tas-B [Hofstätter et al., 2021]: A bi-encoder model that was trained using supervision from a cross-encoder and a ColBERT [Khattab and Zaharia, 2020] model, and produces 768-dimensional dense vectors that are meant for MIPS. The checkpoint used in this work is available on HuggingFace.2
• Splade [Formal et al., 2022]:3 Produces sparse representations for text. The vectors have roughly 30,000 dimensions, where each dimension corresponds to a term in the BERT [Devlin et al., 2019] WordPiece [Wu et al., 2016] vocabulary. Non-zero entries in a vector reflect learnt term importance weights.
⢠Efficient Splade [Lassance and Clinchant, 2022]:4 This model produces queries that have far fewer non-zero entries than the original Splade model, but documents that may have a larger number of non-zero entries.
# References
P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, M. Rosenberg, X. Song, A. Stoica, S. Tiwary, and T. Wang. Ms marco: A human generated machine reading comprehension dataset, 2018.
1 Available at https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
2 Available at https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b
3 Pre-trained checkpoint from HuggingFace available at https://huggingface.co/naver/splade-cocondenser-ensembledistil
4 Pre-trained checkpoints for document and query encoders were obtained from https://huggingface.co/naver/efficient-splade-V-large-doc and https://huggingface.co/naver/efficient-splade-V-large-query, respectively.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, June 2019.
T. Formal, C. Lassance, B. Piwowarski, and S. Clinchant. From distillation to hard negative sampling: Making sparse neural ir models more effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, page 2353â2359, 2022.
S. Hofst¨atter, S.-C. Lin, J.-H. Yang, J. Lin, and A. Hanbury. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, page 113â122, 2021.
O. Khattab and M. Zaharia. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39â48, 2020. | 2401.09350#403 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
C. Lassance and S. Clinchant. An efficiency study for SPLADE models. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, page 2220-2226, 2022.
D. G. Lowe. Distinctive image features from scale-invariant keypoints. In- ternational Journal of Computer Vision, 60:91â110, 2004.
A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic rep- International Journal of Computer resentation of the spatial envelope. Vision, 42:145â175, 2001.
J. Pennington, R. Socher, and C. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing, pages 1532â1543, Oct. 2014.
N. Thakur, N. Reimers, A. R¨uckl´e, A. Srivastava, and I. Gurevych. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In 35th Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. | 2401.09350#404 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's neural machine translation system: Bridging the gap between human and machine translation, 2016.
A. B. Yandex and V. Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 2055-2063, 2016.
H. Zhang, X. Song, C. Xiong, C. Rosset, P. N. Bennett, N. Craswell, and S. Tiwary. Generic intent representation in web search. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 65â74, 2019.
Appendix B Probability Review
Abstract We briefly review key concepts in probability in this appendix.
# B.1 Probability
We identify a probability space denoted by (â¦, F, P) with an outcome space, an events set, and a probability measure. The outcome space, â¦, is the set of all possible outcomes. For example, when flipping a two-sided coin, the outcome space is simply {0, 1}. When rolling a six-sided die, it is instead the set [6] = {1, 2, . . . , 6}.
The events set F is a set of subsets of ⦠that includes ⦠as a member and is closed under complementation and countable unions. That is, if E â F, then we must have that EâF. Furthermore, the union of countably many events Eiâs in F is itself in F: âªiEi â F. A set F that satisfies these properties is called a Ï-algebra. | 2401.09350#406 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
In many of the discussions throughout this monograph, we omit the out- come space and events set because that information is generally clear from context. However, a more formal treatment of our arguments requires a com- plete definition of the probability space.
171
172
B Probability Review
# B.2 Random Variables | 2401.09350#407 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
172
B Probability Review
# B.2 Random Variables
A random variable on a measurable space (â¦, F) is a measurable function X : ⦠â R. It is measurable in the sense that the preimage of any Borel set B â B is an event: X â1(B) = {Ï â ⦠| X(Ï) â B} â F.
A random variable X generates a Ï-algebra that comprises of the preimage of all Borel sets. It is denoted by Ï(X) and formally defined as Ï(X) = {X â1(B) | B â B}.
Random variables are typically categorized as discrete or continuous. X is discrete when it maps ⦠to a discrete set. In that case, its probability mass function is defined as P[X = x] for some x in its range. A continuous random variable is often associated with a probability density function, fX , such that:
b Pla< X <dj= i fx(x)dx.
Consider, for instance, the following probability density function over the real line for parameters µ â R and Ï > 0:
f (x) = â 1 2ÏÏ2 eâ (xâµ)2 2Ï2 . | 2401.09350#408 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
A random variable with the density function above is said to follow a Gaussian distribution with mean µ and variance Ï2, denoted by X â¼ N (µ, Ï2). When µ = 0 and Ï2 = 1, the resulting distribution is called the standard Normal distribution.
Gaussian random variables have attractive properties. For example, the sum of two independent Gaussian random variables is itself a Gaussian vari- able. Concretely, X1 â¼ N (µ1, Ï2 2), then X1 + X2 â¼ N (µ1 + µ2, Ï2 1 + Ï2 2). The sum of the squares of m independent Gaussian ran- dom variables, on the other hand, follows a Ï2-distribution with m degrees of freedom.
# B.3 Conditional Probability
Conditional probabilities give us a way to model how the probability of an event changes in the presence of extra information, such as partial knowledge about a random outcome. Concretely, if (â¦, F, P) is a probability space and A, B â F such that P[B] > 0, then the conditional probability of A given the event B is denoted by P[A | B] and defined as follows: | 2401.09350#409 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
B.4 Independence
We use a number of helpful results concerning conditional probabilities in proofs throughout the monograph. One particularly useful inequality is what is known as the union bound and is stated as follows:
P[âªiAi] ⤠P[Ai]. i
Another fundamental property is the law of total probability. It states that, for mutually disjoint events Aiâs such that ⦠= âªAi, the probability of any event B can be expanded as follows:
P[B] = P[B | Ai] P[Ai]. i
This is easy to verify: the summand is by definition equal to P[B â© Ai] and, considering the events (B â© Ai)âs are mutually disjoint, their sum is equal to P[B â© (âªAi)] = P[B].
# B.4 Independence
Another tool that reflects the effect (or lack thereof) of additional knowledge on probabilities is the concept of independence. Two events A and B are said to be independent if P[A â© B] = P[A] Ã P[B]. Equivalently, we say that A is independent of B if and only if P[A | B] = P[A] when P[B] > 0. | 2401.09350#410 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
When a sequence of random variables are mutually independent and are drawn from the same distribution (i.e., have the same probability density function), we say the random variables are drawn tid: independent and identically-distributed. We stress that mutual independence is a stronger re- striction than pairwise independence: m events {E;}", are mutually inde- pendent if P(N;£;] = []; P[Ei).
We typically assume that data and query points are drawn iid from some (unknown) distribution. This is a standard and often necessary assumption that eases analysis.
173
174
B Probability Review
# B.5 Expectation and Variance
The expected value of a discrete random variable X is denoted by E[X] and defined as follows:
E[X] = x P[X = x]. x
When X is continuous, its expected value is based on the following Lebesgue integral:
E[X] = Xd P . ⦠| 2401.09350#411 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
E[X] = Xd P . â¦
So when a random variable has probability density function fX , its expected value becomes:
E[X] = xfX (x)dx.
For a nonnegative random variable X, it is sometimes more convenient to unpack E X as follows instead:
E[X] = i P[X > aldz.
A fundamental property of expectation is that it is a linear operator. For- mally, E[X + Y ] = E[X] + E[Y ] for two random variables X and Y . We use this property often in proofs.
We state another important property for independent random variables that is easy to prove. If X and Y are independent, then E[XY ] = E[X] E[Y ].
The variance of a random variable is defined as follows:
Var[X] = E [(X - E[X])*] = ELX} - B[Xâ).
Unlike expectation, variance is not linear unless the random variables involved are independent. It is also easy to see that Var[aX] = a2 Var[X] for a constant a.
# B.6 Central Limit Theorem | 2401.09350#412 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
The result known as the Central Limit Theorem is one of the most useful tools in probability. Informally, it states that the average of iid random variables with finite mean and variance converges to a Gaussian distribution. There are several variants of this result that extend the claim to, for example, independent but not identically distributed variables. Below we repeat the formal result for the iid case.
B.6 Central Limit Theorem
Theorem B.1 Let Xiâs be a sequence of n iid random variables with finite mean µ and variance Ï2. Then, for any x â R:
ny * tim p | AMX X) He ) 1 Fut, o?/n oo V2T : noo Z
implying that Z â¼ N (0, 1).
175
Appendix C Concentration of Measure
Abstract By the strong law of large numbers, we know that the average of a sequence of m iid random variables with mean µ converges to µ with probability 1 as m tends to infinity. But how far is that average from µ when m is finite? Concentration inequalities helps us answer that question quantitatively. This appendix reviews important inequalities that are used in the proofs and arguments throughout this monograph.
# C.1 Markovâs Inequality
Lemma C.1 For a nonnegative random variable X and a nonnegative con- stant a ⥠0: | 2401.09350#413 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
Lemma C.1 For a nonnegative random variable X and a nonnegative con- stant a ⥠0:
E[X] a P[X ⥠a] ⤠.
Proof. Recall that the expectation of a nonnegative random variable X can be written as:
E[X] = P[X ⥠x]dx. 0
Because P[X ⥠x] is monotonically nonincreasing, we can expand the above as follows to complete the proof:
E[X] > [ax > alder > [ax > aldx = aP[X > al.
ââ
177
178
C Concentration of Measure
# C.2 Chebyshevâs Inequality
Lemma C.2 For a random variable X and a constant a > 0:
Var[X] P [|X -E[X]| >a] <=.
# P
Proof.
= 2 2 Var[X] P [|x - ELx]]| > a| =P|(X -E[X]) >a| <=.
where the last step follows by the application of Markovâs inequality.
Lemma C.3 Let {Xi}n µ < â and variance Ï2 < â. For δ â (0, 1), with probability 1 â δ:
exo lz nia n | 2401.09350#414 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
Proof. By Lemma C.2, for any a > 0:
i< o/n P |= X,- |= < ;
Setting the right-hand-side to δ, we obtain:
Ï2 na2 = δ =â a = Ï2 δn ,
which completes the proof.
# C.3 Chernoff Bounds
Lemma C.4 Let {X;}"_, be independent Bernoulli variables with success probability p;. Define X = 30, X; and up = E[X] = 30, p;. Then:
30, X; and up = E[X] = [x > (1+ )n| < eT MOH,
P ⤠eâh(δ)µ,
where,
h(t) = (1 + t) log(1 + t) â t.
Proof. Using Markovâs inequality of Lemma C.1 we can write the following for any t > 0:
# a
ââ
C.4 Hoeffdingâs Inequality
tx P Bs >(d+ 5)n| =P [e'* > fra] < Fle
Expanding the expectation, we obtain: | 2401.09350#415 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
Expanding the expectation, we obtain:
E [e'*] =E [eee] =E Te] = [Ele] i i = Il (pie! +(1 ~ pi) i =[[(@+pi(e'- 1) < Tew = el De, by (l+t <e') i
Putting all this together gives us:
e(e De P(x > (1+5)u] < Sate (C.1)
This bound holds for any value t > 0, and in particular a value of t that minimizes the right-hand-side. To find such a t, we may differentiate the right-hand-side, set it to 0, and solve for t to obtain:
µete(etâ1)µ et(1+δ)µ â µ(1 + δ) e(etâ1)µ et(1+δ)µ = 0 =â µet = µ(1 + δ) =â t = log(1 + δ).
Substituting t into Equation (C.1) gives the desired result.
# C.4 Hoeffding's Inequality