This chapter presents some basic mathematical notions, along with some more advanced results from probability, focusing on useful inequalities.
It illustrates some of the mathematical notions using R, in some cases illustrating different algorithmic approaches to the same problem.
A set is a collection of objects, called members or elements of the set, without regard for their order. $a \in A$, pronounced "$a$ is an element of $A$," "$a$ is in $A$," or "$a$ is a member of $A$" means that $a$ is an element of the set $A$. This is the same as writing $A \ni a$, which is pronounced "$A$ contains $a$." If $a$ is not an element of $A$, we write $a \not \in A$. Sets may be described explicitly by listing their contents, or implicitly by specifying a property that all elements of the set share, or a condition that they satisfy. The contents of sets are enclosed in curly braces: $\{ \}$. Examples:
$B$ is a subset of $A$, written $B \subset A$ or $A \supset B$, if every element of the set $B$ is also an element of the set $A$. Thus ${\mathbf N} \subset {\mathbf Z} \subset {\mathbf Q} \subset \Re \subset {\mathbf C}$. The empty set $\emptyset$ is a subset of every set. If $A \subset B$ and $B \subset A$, $A$ and $B$ are the same set, and we write $A = B$. If $B$ is not a subset of $A$, we write $B \not \subset A$ or $A \not \supset B$. $B$ is a proper subset of $A$ if $B \subset A$ but $A \not \subset B$.
The complement of $A$ (with respect to the universe ${\mathcal X}$), written $A^c$ or $A'$, is the set of all objects under consideration (${\mathcal X}$) that are not elements of $A$. That is, $A^c \equiv \{ a \in {\mathcal X} : a \not \in A\}$.
The intersection of $A$ and $B$, written $A \cap B$ or $AB$, is the set of all objects that are elements of both $A$ and $B$: $$ A \cap B \equiv \{a: a \in A \mbox{ and } a \in B \}. $$ If $A \cap B = \emptyset$, we say $A$ and $B$ are disjoint or mutually exclusive.
The union of $A$ and $B$, written $A \cup B$, is the set of all objects that are elements of $A$ or of $B$ (or both): $$ A \cup B \equiv \{a: a \in A \mbox{ or } a \in B \mbox{ or both} \}. $$
The difference or set difference of $A$ and $B$, $A \setminus B$, pronounced "$A$ minus $B$," is the set of all elements of $A$ that are not elements of $B$: $$ A \setminus B \equiv \{a \in A : a \not \in B \} = A \cap B^c. $$
Intervals are special subsets of ${\mathbf R}$: $$ [a, b] \equiv \{x \in \Re : a \le x \le b\} $$ $$ (a, b] \equiv \{x \in \Re : a < x \le b\} $$ $$ [a, b) \equiv \{x \in \Re : a \le x < b\} $$ $$ (a, b) \equiv \{x \in \Re : a < x < b\}. $$ Sometimes we have a collection of sets, indexed by elements of another set: $\{A_\beta : \beta \in B \}$. Then $B$ is called an index set. If $B$ is a subset of the integers ${\mathbf Z}$, usually we write $A_i$ or $A_j$, etc., rather than $A_\beta$. If $B = {\mathbf N}$, we usually write $\{A_j\}_{j=1}^\infty$ rather than $\{A_\beta : \beta \in {\mathbf N} \}$. $$ \bigcap_{\beta \in B} A_\beta \equiv \{a: a \in A_\beta \;\;\forall \beta \in B \}. $$ ($\forall$ means "for all.") If $B = \{1, 2, \ldots, n\}$, we usually write $\bigcap_{j=1}^n A_j$ rather than $\bigcap_{j \in \{1, 2, \ldots, n\}} A_j$. The notations $\bigcup_{\beta \in B} A_\beta$ and $\bigcup_{j=1}^n A_j$ are defined analogously.
A collection of sets $\{A_\beta : \beta \in B \}$ is pairwise disjoint if $A_\beta \cap A_{\beta'} = \emptyset$ whenever $\beta \ne \beta'$. The collection $\{A_\beta : \beta \in B\}$ exhausts or covers the set $A$ if $A \subset \bigcup_{\beta \in B} A_\beta$. The collection $\{A_\beta : \beta \in B \}$ is a partition of the set $A$ if $A = \cup_{\beta \in B} A_\beta$ and the sets $\{A_\beta : \beta \in B \}$ are pairwise disjoint. If $\{A_\beta : \beta \in B \}$ are pairwise disjoint and exhaust $A$, then $\{A_\beta \cap A : \beta \in B \}$ is a partition of $A$.
A set is countable if its elements can be put in one-to-one correspondence with a subset of ${\mathbf N}$. A set is finite if its elements can be put in one-to-one correspondence with $\{1, 2, \ldots, n\}$ for some $n \in {\mathbf N}$. If a set is not finite, it is infinite. ${\mathbf N}$, ${\mathbf Z}$, and ${\mathbf Q}$ are infinite but countable; ${\mathbf R}$ is infinite and uncountable.
The notation $\# A$, pronounced "the cardinality of $A$," is the size of the set $A$. If $A$ is finite, $\# A$ is the number of elements in $A$. If $A$ is not finite but $A$ is countable (if its elements can be put in one-to-one correspondence with the elements of ${\mathbf N}$), then $\# A = \aleph_0$ (aleph-null). If the elements of $A$ can be put in one-to-one correspondence with the elements of $\mathbf{R}$, then $\#A = c$, the cardinality of the continuum.
The power set of a set $A$, denoted $2^A$, is the set of all subsets of the set $A$. For example, the power set of $\{a, b, c\}$ is $$ \{ \emptyset, \{a\}, \{b\}, \{c\}, \{a, b\}, \{a, c\}, \{b, c\}, \{a, b, c\} \}. $$ If $A$ is a finite set, the cardinality of the power set of $A$ is $2^{\# A}$. This can be seen as follows: suppose $\# A = n$ is finite. Consider the elements of $A$ to be written in some canonical order. We can specify an element of the power set by an $n$-digit binary number. The first digit is 1 if the first element of $A$ is in the subset, and 0 otherwise. The second digit is 1 if the second element of $A$ is in the subset, and 0 otherwise, etc. There are $2^n$ $n$-digit binary numbers, so there are $2^n$ subsets. The cardinality of the power set of ${\mathbf N}$ is not $\aleph_0$: by Cantor's theorem, no set can be put in one-to-one correspondence with its power set, so the power set of ${\mathbf N}$ is uncountable; its cardinality is $2^{\aleph_0} = c$.
If $A$ is a finite set, $B$ is a countable set, and $\{A_\beta : \beta \in B \}$ is a partition of $A$, then $$ \# A = \sum_{\beta \in B} \# A_\beta. $$
Suppose $A = A_1 \cup A_2$, but that possibly $A_1 A_2 \ne \emptyset$, so $\{A_1, A_2\}$ might not be a partition of $A$, because $A_1$ and $A_2$ are not necessarily disjoint. Then still $$\#A = \#A_1 + \#A_2 - \#A_1A_2.$$
This is seen most easily using a Venn diagram, and can be proved by constructing a partition of $A$, $A = A_1A_2^c \cup A_1^cA_2 \cup A_1A_2$, and noting that $\#A_1 + \#A_2 = \#A_1A_2^c + \#A_1^cA_2 + 2\#A_1A_2$.
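The two-set identity is easy to check numerically. Here is a quick illustration in R; the particular sets `A1` and `A2` are arbitrary examples, not from the text.

```r
# Check #(A1 U A2) = #A1 + #A2 - #(A1 A2) for two example sets
A1 <- c(1, 2, 3, 4, 5)
A2 <- c(4, 5, 6, 7)
lhs <- length(union(A1, A2))                               # cardinality of the union
rhs <- length(A1) + length(A2) - length(intersect(A1, A2)) # inclusion-exclusion
cat('#(A1 U A2):', lhs, ' #A1 + #A2 - #A1A2:', rhs, '\n')
```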
If $A = A_1 \cup A_2 \cup A_3$ but $\{A_1, A_2, A_3\}$ are not necessarily disjoint, then still
$$\#A = \#A_1 + \#A_2 + \#A_3 - \#A_1A_2 - \#A_1A_3 - \#A_2A_3 + \#A_1A_2A_3.$$More generally, if $A = \cup_{i=1}^n A_i$, then the Inclusion-Exclusion Principle holds:
$$ \#A = \sum_{i=1}^n \#A_i - \sum_{1 \le i_1 < i_2 \le n} \#(A_{i_1}A_{i_2}) + \sum_{1 \le i_1 < i_2 < i_3 \le n} \#(A_{i_1}A_{i_2}A_{i_3}) - \cdots +(-1)^{k-1} \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} \# (A_{i_1}A_{i_2} \cdots A_{i_k}) + \cdots + (-1)^{n-1} \#(A_1 A_2 \cdots A_n). $$

Some basic facts about sets:

If $A\subset B$ and $B\subset A$ then $A = B$. (If two sets are subsets of each other, they are the same set.)
$\emptyset \subset A$. (The empty set is a subset of every set.)
$\emptyset \cup A = A$. (The union of the empty set and any other set is that set.)
$\emptyset \cap A = \emptyset$. (The intersection of the empty set and any other set is empty.)
If $A \subset B$ and $B \subset C$ then $A \subset C$. (The subset relation is transitive.)
If $A \subset B$ then $B^c \subset A^c$. (Complementation reverses the subset relation.)
$A \cap B \subset A$. Moreover, $A\cap B = A$ if and only if $A \subset B$.
$A \subset A\cup B$. Moreover, $A = A\cup B$ if and only if $B \subset A$.
$(A\cap B)^c = A^c \cup B^c$. (de Morgan)
$(A\cup B)^c = A^c\cap B^c$. (de Morgan)
$A \cap B = B \cap A$. (Intersection is commutative.)
$A\cup B = B\cup A$. (Union is commutative.)
$A\cap (B\cap C) = (A\cap B)\cap C$. (Intersection is associative.)
$A\cup (B\cup C) = (A\cup B)\cup C$. (Union is associative.)
$A\cap (B\cup C) = (A\cap B)\cup (A\cap C)$. (Distribution of intersection over union.)
$A\cup (B\cap C) = (A\cup B)\cap (A\cup C)$. (Distribution of union over intersection.)
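These identities can be checked numerically for particular sets. The sketch below verifies de Morgan's laws and one distributive law in R; the universe `X` and the sets `A`, `B`, `C` are arbitrary illustrative choices.

```r
# Numerically verify de Morgan's laws and a distributive law for small example sets
X <- 1:10                                  # the universe, for complements
A <- c(1, 2, 3, 4); B <- c(3, 4, 5, 6); C <- c(5, 6, 7, 8)
comp <- function(S) setdiff(X, S)          # complement with respect to X
deMorgan1 <- setequal(comp(intersect(A, B)), union(comp(A), comp(B)))  # (A B)^c = A^c U B^c
deMorgan2 <- setequal(comp(union(A, B)), intersect(comp(A), comp(B)))  # (A U B)^c = A^c B^c
distrib <- setequal(intersect(A, union(B, C)),
                    union(intersect(A, B), intersect(A, C)))
cat('de Morgan 1:', deMorgan1, ' de Morgan 2:', deMorgan2, ' distributive law:', distrib, '\n')
```

Checking an identity on examples does not prove it, of course, but it is a useful sanity check on both the algebra and the code.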
# Make three sets, A = {1, 2, 3}, B = {"a", "b", "green", 3}, C = {1, 2}
A <- c(1, 2, 3, 3); # this is a vector, but not a set, because 3 is listed twice
cat('A:',A,'\n')
A <- unique(A); # this is a set containing the unique elements of A
cat('A as a set (duplicates removed):',A,'\n')
B <- c("a", "b", "green", 3);
C <- 1:2; #a different construction
#
# Set membership
cat('1 is in A:',1 %in% A,'\n') # is 1 an element of A?
# another way to check set membership
cat('1 is in A (2nd method):',is.element(1, A),'\n')
cat('"green" is in A:',"green" %in% A,'\n')
cat('"green" is in B:',"green" %in% B,'\n')
A: 1 2 3 3
A as a set (duplicates removed): 1 2 3
1 is in A: TRUE
1 is in A (2nd method): TRUE
"green" is in A: FALSE
"green" is in B: TRUE
# Subsets. We will look at two approaches:
# A is a subset of B if a is an element of B for every a in A
cat('C a subset of A:',all(C %in% A),'\n') # C is a subset of A
cat('C a subset of B:',all(C %in% B),'\n') # C is not a subset of B
cat('C is a subset of C:',all(C %in% C),'\n') # C is a subset of itself
# A is a subset of B if A\B is empty
cat('C a subset of A:',length(setdiff(C,A)) == 0,'\n') # C is a subset of A
cat('C a subset of B:',length(setdiff(C,B)) == 0,'\n') # C is not a subset of B
cat('C a subset of C:',length(setdiff(C, C)) == 0,'\n') # C is a subset of itself
C a subset of A: TRUE
C a subset of B: FALSE
C is a subset of C: TRUE
C a subset of A: TRUE
C a subset of B: FALSE
C a subset of C: TRUE
# unions and intersections
cat('AUB:',union(A,B), '\n') # union of A and B
cat('AB:',intersect(A, B), '\n') # intersection of A and B
# set difference
cat('A\\B:',setdiff(A,B), '\n') # A \ B
AUB: 1 2 3 a b green
AB: 3
A\B: 1 2
# Cardinality of a finite set
length(unique(A)) # A has 3 (unique) elements
length(unique(B)) # B has 4 (unique) elements
# Power set of a finite set
# Recall that there's a 1:1 mapping between a subset of a set A of size n and an n digit binary number.
# We will enumerate the 2^n binary numbers of length n and map them to the corresponding subsets of A
# 0 maps to {} and 2^n - 1 maps to A
powerset <- function(A) {
n <- length(A); # cardinality of A
ps <- 2^(0:(n-1)); # powers of 2 from 0 to n-1: 2^0, 2^1, ..., 2^(n-1)
return(lapply(0:(2^n-1), function(x) A[bitwAnd(ps, x) != 0]))
}
'2^A:'
powerset(A)
'2^B:'
powerset(B)
The notation $(s_1, s_2, \ldots, s_n) \equiv (s_j)_{j=1}^n$ denotes an ordered $n$-tuple consisting of $s_1$ in the first position, $s_2$ in the second position, etc. The parentheses are used instead of curly braces to distinguish $n$-tuples from sets: $(s_j)_{j=1}^n \ne \{s_j\}_{j=1}^n$. The $k$th component of the $n$-tuple $s = (s_j)_{j=1}^n$, is $s_k$, $k = 1, 2, \ldots, n$. Two $n$-tuples are equal if their components are equal. That is, $(s_j)_{j=1}^n = (t_j)_{j=1}^n$ means that $s_j = t_j$ for $j = 1, \ldots, n$. In particular, $(s, t) \ne (t, s)$ unless $s=t$. In contrast, $\{s, t \} = \{ t, s \}$ always.
The Cartesian product of $S$ and $T$ is $S \bigotimes T \equiv \{(s, t): s \in S \mbox{ and } t \in T\}$. Unless $S = T$, $S \bigotimes T \ne T \bigotimes S$. ${\mathbf R}^n$ is the Cartesian product of ${\mathbf R}$ with itself, $n$ times; its elements are $n$-tuples of real numbers. If $s$ is the $n$-tuple $(s_1, s_2, \ldots, s_n) = (s_j)_{j=1}^n$ and $t$ is the $n$-tuple $(t_1, t_2, \ldots, t_n) = (t_j)_{j=1}^n$, then $s = t$ iff $s_j = t_j$ for all $j=1, \ldots, n$.
# Cartesian products in R
# Again, there are a variety of ways to do this, including SQL-like approaches (outer joins)
expand.grid(A, B, C)
# now using an outer product
outer(outer(A, B, FUN="paste"),C, FUN="paste")
| | Var1 | Var2 | Var3 |
|---|---|---|---|
| 1 | 1 | a | 1 |
| 2 | 2 | a | 1 |
| 3 | 3 | a | 1 |
| 4 | 1 | b | 1 |
| 5 | 2 | b | 1 |
| 6 | 3 | b | 1 |
| 7 | 1 | green | 1 |
| 8 | 2 | green | 1 |
| 9 | 3 | green | 1 |
| 10 | 1 | 3 | 1 |
| 11 | 2 | 3 | 1 |
| 12 | 3 | 3 | 1 |
| 13 | 1 | a | 2 |
| 14 | 2 | a | 2 |
| 15 | 3 | a | 2 |
| 16 | 1 | b | 2 |
| 17 | 2 | b | 2 |
| 18 | 3 | b | 2 |
| 19 | 1 | green | 2 |
| 20 | 2 | green | 2 |
| 21 | 3 | green | 2 |
| 22 | 1 | 3 | 2 |
| 23 | 2 | 3 | 2 |
| 24 | 3 | 3 | 2 |
Let $A$ be a finite set with $\# A = n$. A permutation of (the elements of) $A$ is an element $s$ of $\bigotimes_{j=1}^n A = A^n$ whose components are distinct elements of $A$. That is, $s = (s_j)_{j=1}^n \in A^n$ is a permutation of $A$ if $\# \{s_j\}_{j=1}^n = n$. There are $n! = n(n-1)\cdots 1$ permutations of a set with $n$ elements: there are $n$ choices for the first component of the permutation, $n-1$ choices for the second (whatever the first might be), $n-2$ for the third (whatever the first two might be), etc. This is an illustration of the Fundamental Rule of Counting: in a sequence of $n$ choices, if there are $m_1$ possibilities for the first choice, $m_2$ possibilities for the second choice (no matter which was chosen in the first place), $m_3$ possibilities for the third choice (no matter which were chosen in the first two places), and so on, then there are $m_1 m_2 \cdots m_n = \prod_{j=1}^n m_j$ possible sequences of choices in all.
The number of permutations of $n$ things taken $k$ at a time, ${}_nP_k$, is the number of ways there are of selecting $k$ of $n$ things, then permuting those $k$ things. There are $n$ choices for the object that will be in the first place in the permutation, $n-1$ for the second place (regardless of which is first), etc., and $n-k+1$ choices for the item that will be in the $k$th place. By the fundamental rule of counting, it follows that ${}_nP_k = n(n-1)\cdots(n-k+1) = n!/(n-k)!$.
The number of subsets of size $k$ that can be formed from $n$ objects is $$ {}_nC_k = {{n}\choose{k}} = {}_nP_k/k! = n(n-1)\cdots(n-k+1)/k! = \frac{n!}{k!(n-k)!}, $$ since each subset of size $k$ can be ordered into $k!$ permutations.
Here are some useful identities:
$$ {{n}\choose{k}} = {{n}\choose{n-k}}, $$

$$ {{n}\choose{0}} = 1, $$

$$ {{n}\choose{1}} = n, $$

$$ {{n} \choose {k}} = \prod_{m=0}^{k-1} \frac{n-m}{m+1}, $$

$$ {{n} \choose {k}} = {{n-1}\choose{k-1}} + {{n-1} \choose {k}}. $$

Because the power set of a set with $n$ elements can be partitioned as $$ \cup_{k=0}^n \left \{ \mbox{all subsets of size $k$} \right \}, $$ and since the power set has $2^n$ elements, it follows that $$ \sum_{k=0}^n {{n} \choose {k}} = 2^n. $$
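These identities are easy to check numerically. A quick sketch in R, using an arbitrary value of $n$:

```r
# Check three of the binomial-coefficient identities numerically for n = 10
n <- 10
# Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k) for k = 1, ..., n-1
pascalOK <- all(choose(n, 1:(n-1)) == choose(n-1, 0:(n-2)) + choose(n-1, 1:(n-1)))
# the row sum: sum over k of C(n, k) is 2^n
sumOK <- sum(choose(n, 0:n)) == 2^n
# symmetry: C(n, k) = C(n, n-k)
symOK <- all(choose(n, 0:n) == choose(n, n - (0:n)))
cat('Pascal:', pascalOK, ' row sum = 2^n:', sumOK, ' symmetry:', symOK, '\n')
```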
# Combinatorics in R
# factorials
n <- 5;
k <- 3;
cat('n!:',factorial(n),'\n')
# number of combinations of n objects taken k at a time
cat('nCk:',choose(n,k),'\n')
# number of permutations of n objects taken k at a time
permu <- function(n, k) {
permu <- 1;
s <- 0;
while (s < k) {
permu <- permu*(n-s);
s <- s+1;
}
return(permu);
}
cat('nPk:',permu(n, k),'\n')
n!: 120
nCk: 10
nPk: 60
Write an R function that constructs all permutations of a finite list.
The function should take as input a list s, and produce as output all permutations of the elements of the list. Items should be considered unique based on their position in the list.
For instance,
s <- c('a','b','c');
permute(s)
should return the 6 permutations of a, b, and c:
a b c
a c b
b a c
b c a
c a b
c b a
Write your function from scratch; i.e., do not use any libraries that are not part of the core of R.
As a matter of mathematics, ${{n}\choose{k}} = \frac{n!}{k!(n-k)!}$, but this is a terrible way to compute ${{n}\choose{k}}$. When $n$ is large, the numerator $n!$ is enormous, and can overflow finite-precision calculations even for values of $n$ and $k$ for which ${{n}\choose{k}}$ is small. Moreover, relying on cancellation in finite-precision arithmetic can lead to large errors, whether the cancellation is through division or subtraction.
So, rather than compute ${{n}\choose{k}}$ by dividing factorials, it's better to build it by multiplying ratios.
Let's have a look in R.
# illustrating overflow of factorial for large n
n <- 1000;
cat('1000!:',factorial(n),'\n') # inf
# however, 1000 choose 2 is only (1000*999)/2 = 499500
cat('1000C2:',choose(n, 2),'\n')
# let's code this from scratch in a stable way
myChoose <- function(n, k) {
# no error checking. If this were for production, we'd check that the arguments are integers
# and that n >= k >= 0.
m <- 0;
myc <- 1;
while (m < k) {
myc <- myc * (n-m)/(m+1);
m <- m+1;
}
return(myc)
}
cat('1000C2 (2nd way):',myChoose(n, 2),'\n')
Warning message: In factorial(n): value out of range in 'gammafn'
1000!: Inf
1000C2: 499500
1000C2 (2nd way): 499500
A related issue comes up in subtracting large numbers.
For instance, algebraically, $x^2 - (x-1)^2 = 2x-1$. But if $x$ has more significant digits than the floating-point mantissa can represent, squaring $x$ and $x-1$ will exceed the precision of the calculation, and the difference of the computed squares will be numerically wrong.
x <- 9999999999999999999999999; # 25 digits
cat('x^2 - (x-1)^2:',x^2 - (x-1)^2,'\n')
cat('2x-1:', 2*x-1, '\n')
x^2 - (x-1)^2: 0 2x-1: 2e+25
Functions are subsets of Cartesian products. We write $f: {\mathcal X} \rightarrow {\mathcal Y}$, pronounced "$f$ maps ${\mathcal X}$ into ${\mathcal Y}$" or "$f$ is a function with domain ${\mathcal X}$ and co-domain ${\mathcal Y}$" if $f \subset {\mathcal X} \bigotimes {\mathcal Y}$ such that for each $x \in {\mathcal X}$, $\exists 1 y \in {\mathcal Y}$ such that $(x, y) \in f$. (The notation $\exists 1 y$ means that there exists exactly one value of $y$.) The set ${\mathcal X}$ is called the domain of $f$ and ${\mathcal Y}$ is called the co-domain of $f$. If the ${\mathcal X}$-component of an element of $f$ is $x$, we denote the ${\mathcal Y}$-component of that element of $f$ by $fx$ or $f(x)$, so that $(x, fx) \in f$; we write $f: x \mapsto y=f(x)$. The functions $f$ and $g$ are equal if they are the same subset of ${\mathcal X} \bigotimes {\mathcal Y}$, which means that they have the same domain ${\mathcal X}$, and $fx = gx$ $\forall x \in {\mathcal X}$.
Let $A \subset {\mathcal X}$. The image of $A$ under $f$ is $$ fA = f(A) \equiv \{ y \in {\mathcal Y} : (x, y) \in f \mbox{ for some } x \in A\}. $$ More colloquially, we would write this as $$ fA = \{ y \in {\mathcal Y} : f(x) = y \mbox{ for some } x \in A \}. $$ If $f{\mathcal X}$ is a proper subset of ${\mathcal Y}$, $f$ is into. If $f{\mathcal X} = {\mathcal Y}$, $f$ is onto. For $B \subset {\mathcal Y}$, the inverse image of $B$ under $f$ or pre-image of $B$ under $f$ is $$ f^{-1}B \equiv \{ x \in {\mathcal X} : fx \in B \}. $$ Similarly, $f^{-1}y \equiv \{ x \in {\mathcal X} : fx = y \}$. If $\forall y \in {\mathcal Y}$, $\# \{ f^{-1} y \} \le 1$, $f$ is one-to-one (1:1). If $f$ is one-to-one and onto, i.e., if $\forall y \in {\mathcal Y}$, $\# \{ f^{-1}y \} = 1$, $f$ is a bijection.
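A finite function can be represented concretely in R as a named vector, with the names as the domain and the values as the co-domain; images and pre-images are then one-liners. The function `f` below and the helper names `imageOf` and `preimageOf` are illustrative choices, not standard R functions.

```r
# A finite function f: {a, b, c, d} -> {1, 2, 3}, represented as a named vector
f <- c(a = 1, b = 2, c = 2, d = 3)
imageOf    <- function(f, A) unique(unname(f[A]))  # image of A under f
preimageOf <- function(f, B) names(f)[f %in% B]    # inverse image of B under f
cat('f({a, b, c}):', imageOf(f, c('a', 'b', 'c')), '\n')  # 1 2
cat('preimage of {2}:', preimageOf(f, 2), '\n')           # b c
# f is onto {1, 2, 3}, but it is not 1:1, since f(b) = f(c) = 2
```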
A group $({\mathcal G}, \times)$ is a nonempty set ${\mathcal G}$ together with a binary operation $\times$ such that:

There is an element $e \in {\mathcal G}$ with $e \times a = a$ for every $a \in {\mathcal G}$. The element $e$ is called the identity.

For every $a \in {\mathcal G}$ there is an element $a^{-1} \in {\mathcal G}$ with $a^{-1} \times a = e$. (Every element has an inverse.)

For all $a, b, c \in {\mathcal G}$, $a \times (b \times c) = (a \times b) \times c$. (The group operation is associative.)
If, in addition, for every $a, b \in {\mathcal G}$, $a \times b = b \times a$ (if the group operation commutes), we say that $({\mathcal G}, \times)$ is an Abelian group or commutative group.
Examples of groups include the real numbers together with ordinary addition, $({\mathbf R}, +)$; the real numbers other than zero together with ordinary multiplication, $(\Re \setminus \{0\}, \times)$; the rational numbers together with ordinary addition, $({\mathbf Q}, +)$; and the integers 0 to $p-1$, $p$ prime, together with addition modulo $p$, $( \{0, 1, \ldots, p-1\}, +)$.
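The last example, addition modulo $p$, can be checked numerically. The sketch below verifies closure, the identity, inverses, and (a spot check of) associativity for $p = 7$; the helper `addp` is an illustrative name.

```r
# Check the group axioms for ({0, 1, ..., p-1}, addition mod p) with p = 7
p <- 7
G <- 0:(p - 1)
addp <- function(a, b) (a + b) %% p
closureOK  <- all(outer(G, G, addp) %in% G)     # closure under the operation
identityOK <- all(addp(0, G) == G)              # 0 is the identity
inverseOK  <- all(addp(G, (p - G) %% p) == 0)   # the inverse of a is (p - a) mod p
assocOK    <- addp(addp(2, 3), 4) == addp(2, addp(3, 4))  # a spot check of associativity
cat('closure:', closureOK, ' identity:', identityOK,
    ' inverses:', inverseOK, ' associativity:', assocOK, '\n')
```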
In a group, $a \times a^{-1} = e$. (The inverse from the left is also the inverse from the right; equivalently, $(a^{-1})^{-1} = a$.)
In a group, $a \times e = a$. (The identity from the left is also the identity from the right.)
A field $({\mathcal F}, \times, +)$ is a set ${\mathcal F}$ together with two binary operations, multiplication $\times$ and addition $+$, such that $({\mathcal F}, +)$ is an Abelian group with identity $0$; $({\mathcal F} \setminus \{0\}, \times)$ is an Abelian group with identity $1$; and multiplication distributes over addition:
$a \times (b+c) = a \times b + a \times c$ and $(a+b) \times c = a\times c + b \times c$.
The additive inverse of $a$ is denoted $-a$; the multiplicative inverse of $a \ne 0$ is $a^{-1} = 1/a$.
Examples: $({\mathbf R}, \times, +)$, where $\times$ is ordinary (real) multiplication and $+$ is ordinary (real) addition. The complex numbers ${\mathbf C}$, with complex multiplication and addition.
These (and the extended reals) are the only fields we will use.
We shall use the following conventions: for every $x \in \Re$, $x + \infty = \infty$ and $x + (-\infty) = -\infty$; for every $x > 0$, $x \times \infty = \infty$ and $x \times (-\infty) = -\infty$; for every $x < 0$, $x \times \infty = -\infty$ and $x \times (-\infty) = \infty$; and $0 \times \infty = 0 \times (-\infty) = 0$.
With these conventions, $([-\infty, \infty], \times, +)$ is a field.
A linear vector space over a field ${\mathcal F}$ is a set ${\mathcal X}$ of elements (vectors), together with vector addition $+ : {\mathcal X} \bigotimes {\mathcal X} \rightarrow {\mathcal X}$ and scalar multiplication ${\mathcal F} \bigotimes {\mathcal X} \rightarrow {\mathcal X}$, such that:

$({\mathcal X}, +)$ is an Abelian group.

$1 x = x$ $\forall x \in {\mathcal X}$.

$\alpha (x + y) = \alpha x + \alpha y$ $\forall \alpha \in {\mathcal F}$, $x, y \in {\mathcal X}$. (distribution)

$(\alpha \beta) x = \alpha (\beta x)$ $\forall \alpha, \beta \in {\mathcal F}, x \in {\mathcal X}$. (association)

$(\alpha + \beta) x = \alpha x + \beta x$ $\forall \alpha, \beta \in {\mathcal F}$, $x \in {\mathcal X}$. (distribution)
A set $\{x_\alpha : \alpha \in A \} \subset {\mathcal X}$ is linearly dependent if there exist constants $\{ \beta_\alpha : \alpha \in A \} \subset {\mathcal F}$, not all equal to zero, such that $\sum_{\alpha \in A} \beta_\alpha x_\alpha = 0$. A set is linearly independent if it is not linearly dependent.
The span of a set of elements of ${\mathcal X}$ is the set of all finite linear combinations of those elements; it is a subspace of ${\mathcal X}$.
If you cannot find necessary conditions, give sufficient ones.
Example: Let ${\mathcal X} = \Re^n$, ${\mathcal Y} = \Re^m$, ${\mathcal F} = \Re$, and let $M$ be an $m$ by $n$ matrix. Then the mapping $x \mapsto Mx$ is a linear transformation from ${\mathcal X}$ to ${\mathcal Y}$.
It is a theorem that all bases for ${\mathcal X}$ have the same cardinality.
A norm on a linear vector space ${\mathcal X}$ over $\Re$ is a mapping $\| \cdot \| : {\mathcal X} \rightarrow [0, \infty)$ such that $\|x\| = 0$ if and only if $x = 0$; $\| \alpha x \| = |\alpha| \, \|x\|$ $\forall \alpha \in \Re$, $x \in {\mathcal X}$; and $\|x + y\| \le \|x\| + \|y\|$ $\forall x, y \in {\mathcal X}$ (the triangle inequality). A linear vector space endowed with a norm is called a normed linear vector space.
A (real) inner product on a linear vector space ${\mathcal X}$ is a mapping $\langle \cdot , \cdot \rangle$ from ${\mathcal X} \bigotimes {\mathcal X}$ into $\Re$ such that $\forall x, y, z \in {\mathcal X}$ and all $\alpha \in \Re$: $\langle x, y \rangle = \langle y, x \rangle$ (symmetry); $\langle \alpha x + y, z \rangle = \alpha \langle x, z \rangle + \langle y, z \rangle$ (linearity); and $\langle x, x \rangle \ge 0$, with equality if and only if $x = 0$ (positive definiteness). The inner product induces the norm $\|x\| \equiv \langle x, x \rangle^{1/2}$.
A (real) Hilbert space is an inner product space that is sequentially complete: every Cauchy sequence has a limit.
$\Re$ with ordinary multiplication as the inner product.
$\Re^n$ with the usual dot product, Euclidean norm.
$\ell^2$: infinite sequences with bounded sum of squares, $\langle x, y \rangle = \sum_{j=1}^\infty x_j y_j$.
$L^2$, space of Lebesgue square-integrable functions, inner product $\langle x, y \rangle = \int x(t)y(t)\mu(dt)$.
Hilbert spaces are self-dual: every bounded linear functional on a Hilbert space can be written as the inner product with an element of the space (this is the Riesz Hilbert space representation theorem). Moreover, there is an isometric isomorphism between ${\mathcal H}^*$, normed dual of ${\mathcal H}$, and ${\mathcal H}$ itself: Every bounded linear functional on ${\mathcal H}$ can be written as the inner product with an element of ${\mathcal H}$; every element of ${\mathcal H}$ defines a bounded linear functional on ${\mathcal H}$; and the operator norm of the linear functional and Hilbert-space norm of the element are equal.
Finite-dimensional subspaces of Hilbert spaces are closed.
Two elements $x, y$ of a Hilbert space ${\mathcal H}$ are orthogonal if $\langle x, y \rangle = 0$; then we write $x \perp y$.
An element $x$ of a Hilbert space ${\mathcal H}$ is orthogonal to a subset ${\mathcal Y} \subset {\mathcal H}$ iff $x \perp y$ $\forall y \in {\mathcal Y}$; then we write $x \perp {\mathcal Y}$.
Let ${\mathcal Y}$ be a closed subspace of Hilbert space ${\mathcal H}$. Then each $x \in {\mathcal H}$ has a unique decomposition $x = x_\| + x_\perp$ where $x_\| \in {\mathcal Y}$ and $x_\perp \perp {\mathcal Y}$.
Proof. [TO DO!]
[TO DO!]
A relation $\le$ on a set ${\mathcal X}$ is a partial order if it is reflexive ($x \le x$ $\forall x \in {\mathcal X}$), antisymmetric ($x \le y$ and $y \le x$ imply $x = y$), and transitive ($x \le y$ and $y \le z$ imply $x \le z$). If, in addition, for any $x$, $y \in {\mathcal X}$, either $x \le y$ or $y \le x$ (or both), then $\le$ is an order.
The usual $\le$ is an order on ${\mathbf R}$. Set inclusion gives an order among the power set (set of all subsets) of a given set: $x \le y$ if $x \subset y$. One can think of orders as subsets of ${\mathcal X} \bigotimes {\mathcal X}$ or as mappings from ${\mathcal X} \bigotimes {\mathcal X} \rightarrow \{0, 1\}$. Henceforth, we take ${\mathbf R}$ to be ordered by $\le$.
A set $C$ in a linear vector space is a cone if $x \in C$ implies $\alpha x \in C$ for every $\alpha > 0$. Examples. The positive numbers are a cone in $\Re$. The positive orthant is a cone in $\Re^n$.
A positive cone $P$ in a linear vector space ${\mathcal X}$ is a cone that contains $0$ and is closed under addition ($x, y \in P$ implies $x + y \in P$). A positive cone defines a relation $\le$ on ${\mathcal X}$: $x \le y$ means $y - x \in P$. Examples. In ${\mathbf R}$, $[0, \infty)$ is the usual positive cone. In ${\mathbf R}^n$, the positive orthant ($n$-tuples whose components are all non-negative) is the usual positive cone. The set of non-negative functions and the set of monotone functions can form the positive cones in some function spaces.
Note: the relation $\le$ defined above is almost--but not quite--a partial order on the linear vector space ${\mathcal X}$: it does not satisfy the second axiom. If $P$ satisfies $(x \in P \mbox{ and } -x \in P)$ implies $x = 0$, then the relation $\le$ is a partial order.
A set $D$ in a linear vector space is convex if $\alpha x_1 + (1-\alpha) x_2 \in D$ $\forall x_1, x_2 \in D$, and all $\alpha \in [0, 1]$. A mapping $T: D \rightarrow {\mathcal Y}$ is convex if $D$ is convex and $T(\alpha x_1 + (1-\alpha) x_2) \le \alpha T x_1 + (1-\alpha) T x_2$, $\forall x_1, x_2 \in D$, and all $\alpha \in [0, 1]$.
Note that convexity depends on the definition of the positive cone $P$ in ${\mathcal Y}$! Convexity and related notions (such as pseudoconvexity and quasiconvexity) play a crucial role in optimization theory. (A mapping $T$ is quasiconvex if its domain is convex and $T(\alpha x_1 + (1-\alpha) x_2) \le \max(T x_1, Tx_2)$, $\forall x_1, x_2 \in D$, and all $\alpha \in [0, 1]$.)
The Projection Theorem is one of the most widely used results in statistics. [TO DO!]
Let ${\mathcal X}$ be a Hilbert space and let ${\mathcal Y}$ be a subspace of ${\mathcal X}$. Let $x \in {\mathcal X}$. Consider the minimization problem $$ \min_{y \in {\mathcal Y}} \| x - y \|. $$ Suppose $y_0$ is a solution to the problem. Then $x - y_0 \perp {\mathcal Y}$.
[TO DO!]
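The conclusion can be illustrated numerically in $\Re^5$: project a vector onto the column space of a matrix by least squares, and check that the residual is orthogonal to the subspace. The matrix `Y` and vector `x` below are arbitrary illustrative choices.

```r
# Projection in R^5: the least-squares residual is orthogonal to the subspace
set.seed(1)
Y <- matrix(rnorm(5 * 2), 5, 2)  # columns span a 2-dimensional subspace of R^5
x <- rnorm(5)
y0 <- Y %*% qr.solve(Y, x)       # projection of x onto the column space of Y
r0 <- as.vector(x - y0)          # the residual x - y0
dots <- as.vector(crossprod(Y, r0))  # inner products of r0 with the basis vectors
cat('<x - y0, basis vectors>:', dots, '\n')  # numerically zero
```

`qr.solve` returns the least-squares coefficients for an overdetermined system, so `y0` minimizes $\|x - y\|$ over the column space of `Y`.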
Let $\{ a_j \}_{j=1}^n$ and $\{q_j\}_{j=1}^n$ be sets of nonnegative numbers with $\sum_j q_j = 1$. Then $$ \prod_{j=1}^n a_j^{q_j} \le \sum_{j=1}^n q_j a_j. $$
[TO DO!]
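The inequality (the weighted arithmetic-geometric mean inequality) is easy to check numerically; the random inputs below are arbitrary illustrations.

```r
# Numerical check of the weighted AM-GM inequality for random nonnegative inputs
set.seed(2)
a <- runif(5, 0, 10)            # nonnegative numbers a_j
q <- runif(5); q <- q / sum(q)  # weights q_j summing to 1
geo <- prod(a^q)                # weighted geometric mean
ari <- sum(q * a)               # weighted arithmetic mean
cat('geometric mean:', geo, ' arithmetic mean:', ari, '\n')
```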
[TO DO!]
If ${\mathcal H}$ is a Hilbert space and $x , y \in {\mathcal H}$, then $$ \left |\langle x, y \rangle \right | \le \|x\| \| y \|. $$
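In $\Re^n$ with the usual dot product, the Cauchy-Schwarz inequality can be checked directly; the random vectors below are arbitrary illustrations.

```r
# Cauchy-Schwarz in R^10 with the usual inner product
set.seed(3)
x <- rnorm(10); y <- rnorm(10)
lhs <- abs(sum(x * y))                  # |<x, y>|
rhs <- sqrt(sum(x^2)) * sqrt(sum(y^2))  # ||x|| ||y||
cat('|<x,y>|:', lhs, ' ||x|| ||y||:', rhs, '\n')
```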
This follows Lugosi (2006) rather closely.
If $X$ is a nonnegative real-valued random variable, then $$ {\mathbb E} X = \int_0^\infty {\mathbb P} \{X \ge x\} dx $$ (the tail-integral formula).
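As a sanity check, for an exponential random variable with rate 1 (chosen here as an illustrative example), ${\mathbb P}\{X \ge x\} = e^{-x}$ and ${\mathbb E} X = 1$, so the integral of the survival function should be 1:

```r
# Tail-integral formula for X ~ exponential(rate = 1), for which E X = 1
surv <- function(x) pexp(x, rate = 1, lower.tail = FALSE)  # P{X >= x} = e^{-x}
tailIntegral <- integrate(surv, 0, Inf)$value
cat('integral of P{X >= x}:', tailIntegral, ' E X:', 1, '\n')
```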
If $\phi$ is a convex function from ${\mathcal X}$ to $\Re$, then $\phi({\mathbb E} X) \le {\mathbb E} \phi(X)$ (Jensen's Inequality).
From the tail-integral formula, $$ {\mathbb E} X \ge \int_0^t {\mathbb P} \{X \ge x\} dx \ge t {\mathbb P} \{X \ge t \}, $$ which yields Markov's Inequality: $$ {\mathbb P} \{ X \ge t \} \le \frac{{\mathbb E} X}{t}. $$
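A quick numerical illustration of Markov's Inequality, again using an exponential random variable with rate 1 (so ${\mathbb E} X = 1$) as an arbitrary example:

```r
# Markov's inequality for X ~ exponential(rate = 1), so E X = 1
t <- 3
tailProb <- pexp(t, rate = 1, lower.tail = FALSE)  # P{X >= t} = e^{-t}
bound <- 1 / t                                     # E X / t
cat('P{X >= t}:', tailProb, ' Markov bound E X / t:', bound, '\n')
```

The bound is loose here (about 0.05 versus 1/3), which is typical: Markov's Inequality uses only the mean.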
Moreover, for any strictly increasing nonnegative function $f$ and nonnegative random variable $X$, we have the Generalized Markov Inequality: $$ {\mathbb P} \{ X \ge t \} = {\mathbb P} \{ f(X) \ge f(t) \} \le \frac{{\mathbb E} f(X)}{f(t)}. $$ In particular, for any real-valued $X$ and real $q > 0$, $| X - {\mathbb E} X |$ is a nonnegative random variable and $f(x) = x^q$ is strictly increasing and nonnegative on $[0, \infty)$, so $$ {\mathbb P} \{| X - {\mathbb E} X | \ge t \} = {\mathbb P} \{ | X - {\mathbb E} X |^q \ge t^q \} \le \frac{{\mathbb E} | X - {\mathbb E} X |^q}{t^q}. $$ Chebychev's inequality is the special case $q = 2$: $$ {\mathbb P} \{ | X - {\mathbb E} X |^2 \ge t^2 \} \le \frac{{\mathbb E} | X - {\mathbb E} X |^2}{t^2} = \frac{{\mbox{Var}} X}{t^2}. $$
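A numerical illustration of Chebychev's inequality, taking $X$ uniform on $[0, 1]$ (an arbitrary example, with ${\mathbb E} X = 1/2$ and ${\mbox{Var}} X = 1/12$):

```r
# Chebychev's inequality for X ~ uniform[0, 1]: E X = 1/2, Var X = 1/12
t <- 0.25
tailProb <- 1 - (punif(0.5 + t) - punif(0.5 - t))  # P{|X - E X| >= t}
bound <- (1/12) / t^2                              # Var X / t^2
cat('P{|X - EX| >= t}:', tailProb, ' Chebychev bound:', bound, '\n')
```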
Apply the Generalized Markov Inequality with $f(x) = e^{sx}$ for positive $s$ to obtain the Chernoff Bound: $$ {\mathbb P} \{ X \ge t \} = {\mathbb P} \{ e^{sX} \ge e^{st} \} \le \frac{{\mathbb E} e^{sX}}{e^{st}} $$ for all $s > 0$. For particular $X$, one can optimize over $s$.
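For example (an illustration, not from the text), if $X \sim N(0,1)$ then ${\mathbb E} e^{sX} = e^{s^2/2}$, so the Chernoff bound is $e^{s^2/2 - st}$, minimized at $s = t$, giving $e^{-t^2/2}$. A sketch of the optimization over a grid of $s$ values:

```r
# Chernoff bound for X ~ N(0, 1): E e^{sX} = e^{s^2/2}, so the bound is
# exp(s^2/2 - s*t); it is minimized at s = t, giving exp(-t^2/2)
t <- 2
s <- seq(0.1, 5, by = 0.1)
bounds <- exp(s^2 / 2 - s * t)
best <- min(bounds)                          # optimized over the grid
trueTail <- pnorm(t, lower.tail = FALSE)     # the actual tail probability
cat('optimized bound:', best, ' exp(-t^2/2):', exp(-t^2 / 2),
    ' true tail:', trueTail, '\n')
```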
Let $\{ X_j \}_{j=1}^n$ be independent, and define $S_n \equiv \sum_{j=1}^n X_j$. Applying the Chernoff Bound yields $$ {\mathbb P} \{ S_n - {\mathbb E} S_n \ge t \} \le e^{-st} {\mathbb E} e^{s(S_n - {\mathbb E} S_n)} = e^{-st} \prod_{j=1}^n {\mathbb E} e^{s(X_j - {\mathbb E} X_j)}. $$ Hoeffding bounds the moment generating function of a bounded random variable $X$: if ${\mathbb E} X = 0$ and $a \le X \le b$ with probability 1, then $$ {\mathbb E} e^{sX} \le e^{s^2 (b-a)^2/8}, $$ from which follows Hoeffding's tail bound.
If $\{X_j\}_{j=1}^n$ are independent and ${\mathbb P} \{a_j \le X_j \le b_j\} = 1$, then $$ {\mathbb P} \{ S_n - {\mathbb E} S_n \ge t \} \le e^{-2t^2/\sum_{j=1}^n (b_j - a_j)^2} $$ and $$ {\mathbb P} \{ S_n - {\mathbb E} S_n \le -t \} \le e^{-2t^2/\sum_{j=1}^n (b_j - a_j)^2} $$
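A simulation illustrating the bound for sums of uniform random variables (an arbitrary illustrative choice, with $a_j = 0$ and $b_j = 1$ so $\sum_j (b_j - a_j)^2 = n$):

```r
# Hoeffding's tail bound for S_n = sum of n independent uniform[0,1] variables
set.seed(4)
n <- 100; t <- 5; reps <- 10000
S <- replicate(reps, sum(runif(n)))  # draws of S_n; E S_n = n/2
empirical <- mean(S - n / 2 >= t)    # empirical estimate of P{S_n - E S_n >= t}
bound <- exp(-2 * t^2 / n)           # each (b_j - a_j)^2 = 1, so the sum is n
cat('empirical tail:', empirical, ' Hoeffding bound:', bound, '\n')
```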
Suppose $f$ is a convex, real function and ${\mathcal X}$ is a finite set. Let $\{X_j \}_{j=1}^n$ be a simple random sample from ${\mathcal X}$ and let $\{Y_j \}_{j=1}^n$ be an iid uniform random sample (with replacement) from ${\mathcal X}$. Then $$ {\mathbb E} f \left ( \sum_{j=1}^n X_j \right ) \le {\mathbb E} f \left ( \sum_{j=1}^n Y_j \right ). $$
Suppose $\{X_j \}_{j=1}^n$ are independent with ${\mathbb E} X_j = 0$ for all $j$, ${\mathbb P} \{ | X_j | \le c\} = 1$, $\sigma_j^2 = {\mathbb E} X_j^2$, and $\sigma^2 = \frac{1}{n} \sum_{j=1}^n \sigma_j^2$. Then for any $\epsilon > 0$, $$ {\mathbb P} \{ S_n/n > \epsilon \} \le e^{-n \epsilon^2 / (2(\sigma^2 + c\epsilon/3))}. $$
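A simulation illustrating this bound for Rademacher ($\pm 1$ with equal probability) variables, an illustrative choice for which $c = 1$ and $\sigma^2 = 1$:

```r
# The bound above for independent Rademacher (+/-1) variables: c = 1, sigma^2 = 1
set.seed(5)
n <- 200; eps <- 0.2; reps <- 10000
Sn <- replicate(reps, sum(sample(c(-1, 1), n, replace = TRUE)))
empirical <- mean(Sn / n > eps)                 # empirical estimate of P{S_n/n > eps}
bound <- exp(-n * eps^2 / (2 * (1 + eps / 3)))  # sigma^2 = 1, c = 1
cat('empirical tail:', empirical, ' bound:', bound, '\n')
```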
Next chapter: Linear Algebra