Contents

COMP0199

1 Sequences of Functions

$(f_n)$ or $(f_n)_{n \in \mathbb{N}}$ denotes the sequence of functions where $f_n : I \to \mathbb{R}$ for all $n \in \mathbb{N}$.

Sequence of Functions
An assignment of a function $f_n$ to each $n \in \mathbb{N}$.

Hereafter, assume $I \subseteq \mathbb{R}$ as the domain / target interval.

1.1 Background

For numbers, the sequence $(a_n)$ tends to $\ell$ iff. $\forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall n \geq N,\ |a_n - \ell| < \varepsilon$.

A sequence of numbers $(a_n)$ is Cauchy if $\forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall m, n \geq N,\ |a_m - a_n| < \varepsilon$.

A sequence is Cauchy iff. it is convergent (in complete metric spaces). Note that the Cauchy Criterion does not refer to the limit itself.

1.2 Pointwise Convergence

If $\lim_{n \to \infty} f_n(x)$ exists and is finite for all $x \in I$,
we can define the limit function $f : x \mapsto \lim_{n \to \infty} f_n(x)$.
$(f_n)$ is said to converge pointwise towards $f$.

Pointwise Convergence

The sequence $(f_n)$ converges pointwise to $f$ on $I$ if
$\forall x \in I,\ \forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall n \geq N,\ |f_n(x) - f(x)| < \varepsilon.$

I.e., $f_n(x) \to f(x)$ as $n \to \infty$, for each fixed $x$.

A sequence of functions $(f_n)$ is pointwise Cauchy if $(f_n(x))$ is Cauchy for all $x \in I$.
A sequence is pointwise Cauchy iff. it converges pointwise.

1.3 Uniform Convergence

Uniform Convergence

The sequence $(f_n)$ converges uniformly to $f$ on the interval $I$ if
$\forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall n \geq N,\ \forall x \in I,\ |f_n(x) - f(x)| < \varepsilon.$

I.e., $\sup_{x \in I} |f_n(x) - f(x)| \to 0$ as $n \to \infty$.

Uniform convergence implies pointwise convergence, to the same limit. (It is stronger.)

Continuity is preserved:
The limit $f$ is continuous if every $f_n$ is continuous and $(f_n)$ converges uniformly.

Integrals are preserved:
If $(f_n)$ converges uniformly to $f$ on $[a, b]$, then
$\lim_{n \to \infty} \int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.$

Derivatives are more complicated:
If $(f_n(x_0))$ converges for some $x_0 \in [a, b]$ and $(f_n')$ converges uniformly on $[a, b]$, then
$(f_n)$ converges to a differentiable $f$ with $f' = \lim_{n \to \infty} f_n'$,

…and the convergence of $(f_n)$ is also uniform.

Note: the uniform convergence of the sequence of derivatives constrains the “shape” of the limit function, while the single-point convergence of the sequence itself pins down the remaining constant, making the limit uniquely well-defined. (Consider constant functions $f_n \equiv n$: the derivatives converge uniformly to $0$, yet $(f_n)$ converges nowhere.)
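A minimal numerical sketch (in Python, using the standard example $f_n(x) = x^n$, which is not from these notes): on $[0, 1]$ the convergence is only pointwise (the sup-error never shrinks), while on any $[0, r]$ with $r < 1$ it is uniform.

```python
import numpy as np

# Sketch: f_n(x) = x^n. Pointwise limit on [0, 1]: 0 for x < 1, 1 at x = 1.
# The sup-error on [0, 1] stays near 1 (no uniform convergence there),
# while on [0, 0.9] it tends to 0 (uniform convergence on [0, r], r < 1).
x_full = np.linspace(0.0, 1.0, 1001)
x_sub = np.linspace(0.0, 0.9, 1001)
f_limit = np.where(x_full < 1.0, 0.0, 1.0)

for n in (5, 50, 500):
    sup_full = np.max(np.abs(x_full ** n - f_limit))
    sup_sub = np.max(np.abs(x_sub ** n))        # pointwise limit is 0 on [0, 0.9]
    print(f"n={n:3d}:  sup on [0,1] = {sup_full:.3f},  sup on [0,0.9] = {sup_sub:.2e}")
```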

2 Series of Functions

$\sum f_n$ denotes the series of functions, i.e., the sum of the sequence $(f_n)$.

2.1 easy stuff

$\sum f_n$ is more specifically the function $x \mapsto \sum_{n=0}^{\infty} f_n(x)$.
Clearly, this (infinite) sum may be undefined.

The partial sums of a series form a sequence themselves:
$S_N = \sum_{n=0}^{N} f_n.$

$\sum f_n$ converges pointwise on $I$ if the sequence of partial sums $(S_N)$ converges pointwise on $I$.

Weierstrass M-Test
A test for determining whether a series of functions converges normally:
$\sum f_n$ converges normally on $I$
if there exist constants $M_n$ with $|f_n(x)| \leq M_n$
for all $x \in I$, and $\sum M_n$ converges.

Note that normal convergence implies uniform and absolute convergence.
The Weierstrass M-test may be seen as the Cauchy Criterion for uniform convergence.
Furthermore, the $x$-dependence is eliminated by effectively taking the suprema: $M_n = \sup_{x \in I} |f_n(x)|$.
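A sketch of the M-test in action (Python; the series $\sum_{n \geq 1} \sin(nx)/n^2$ with $M_n = 1/n^2$ is a standard example, not from these notes). The sup-error of the $N$-th partial sum is bounded by the tail sum of the $M_n$:

```python
import numpy as np

# Sketch: M-test for sum_{n>=1} sin(n x)/n^2, with M_n = 1/n^2 (sum M_n converges).
x = np.linspace(-np.pi, np.pi, 201)
N, BIG = 50, 20000          # a BIG-term partial sum stands in for the full sum

def partial_sum(x, upto):
    n = np.arange(1, upto + 1)
    return np.sum(np.sin(np.outer(x, n)) / n**2, axis=1)

sup_err = np.max(np.abs(partial_sum(x, N) - partial_sum(x, BIG)))
tail = np.sum(1.0 / np.arange(N + 1, BIG + 1) ** 2)   # sum of M_n for n = N+1..BIG
print(f"sup error = {sup_err:.2e} <= tail bound = {tail:.2e}: {sup_err <= tail}")
```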

2.2 Power Series

Power Series

Series of functions of the form
$\sum_{n=0}^{\infty} a_n (x - x_0)^n.$

A power series always converges for certain $x$ (at least at $x = x_0$):

Radius of Convergence

Some $R \in [0, \infty]$ such that the power series converges uniformly on any closed interval $[x_0 - r, x_0 + r]$ with $r < R$.
Divergence when $|x - x_0| > R$; unknown when $|x - x_0| = R$.
$R$ can be infinite for series convergent everywhere.

D’Alembert’s Ratio Test

A test to find the radius of convergence of a power series.
For coefficients $a_n$, consider the limit $\ell$ of the sequence of ratios $\left|\frac{a_{n+1}}{a_n}\right|$:
$R = \frac{1}{\ell} \qquad (R = \infty \text{ if } \ell = 0).$

I.e., convergence when $|x - x_0| < \frac{1}{\ell}$.
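A quick numeric illustration of the ratio test (Python; the two example series are arbitrary choices, not from these notes): the exponential series $\sum x^n / n!$ has coefficient ratios tending to $0$ (so $R = \infty$), while $\sum n x^n$ has ratios tending to $1$ (so $R = 1$).

```python
from math import factorial

# Sketch: d'Alembert ratio test applied to coefficients.
# exp series: a_n = 1/n!  ->  |a_{n+1}/a_n| = 1/(n+1) -> 0, so R = infinity.
# sum n x^n:  a_n = n     ->  |a_{n+1}/a_n| = (n+1)/n -> 1, so R = 1.
for n in (10, 100, 1000):
    exp_ratio = factorial(n) / factorial(n + 1)   # == 1/(n+1)
    poly_ratio = (n + 1) / n
    print(f"n={n:5d}:  exp ratio = {exp_ratio:.2e} (-> 0),  "
          f"n*x^n ratio = {poly_ratio:.4f} (-> 1)")
```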

2.3 Taylor Series

Taylor Series are a special case of power series.
Within the radius of convergence, the Taylor series of an analytic function converges to the function itself. (This can fail for functions that are merely infinitely differentiable.)

For a function $f$ infinitely differentiable at $x_0$, its Taylor Series centered at $x_0$ is
$\sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n.$
2.3.1 Well-known Functions
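For reference, the standard Maclaurin expansions (this list is a reconstruction of the presumably intended content; all centered at $x_0 = 0$):

$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$ (all $x$)

$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}$ (all $x$)

$\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}$ (all $x$)

$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n$ ($|x| < 1$)

$\ln(1 + x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n}$ ($-1 < x \leq 1$)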

2.3.2 Approximations

The degree-$N$ Taylor polynomial of $f$ centered at $x_0$ is just the series truncated after the degree-$N$ term:
$T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n$

for $N \geq 0$.

$R_N(x) = f(x) - T_N(x)$ is the tail of the remaining terms in the series.
There are numerous ways to express this sub-series beyond a simple summation.

Lagrange Form of the Taylor Remainder

$R_N(x) = \frac{f^{(N+1)}(\xi)}{(N+1)!} (x - x_0)^{N+1},$ where $\xi$ is a function of $x$ and is strictly between $x_0$ and $x$ (open interval).

The derivation of such a term that “captures” an entire series comes from the Mean Value Theorem; the heavy lifting is done by $\xi$, which is a function itself and is not computable, but $f^{(N+1)}$ is bounded and thus provides an upper bound for $R_N$. Let $M = \sup |f^{(N+1)}|$ over the interval:

Since $M$ is constant, we have, for all such $x$:
$|R_N(x)| \leq \frac{M}{(N+1)!} |x - x_0|^{N+1}.$
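A numerical sanity check of this bound (Python; $f = \sin$ centered at $x_0 = 0$ is an arbitrary example, where $M = 1$ bounds every derivative of $\sin$):

```python
from math import factorial, sin

# Sketch: checking |R_N(x)| <= M/(N+1)! * |x - x0|^(N+1) for f = sin, x0 = 0.
def taylor_sin(x, N):
    # degree-N Taylor polynomial of sin at 0 (only odd powers appear)
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(N // 2 + 1))

x = 1.2
for N in (1, 3, 5, 7):
    actual = abs(sin(x) - taylor_sin(x, N))
    bound = abs(x) ** (N + 1) / factorial(N + 1)   # with M = 1
    print(f"N={N}:  |R_N| = {actual:.2e}  <=  {bound:.2e}  ({actual <= bound})")
```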

3 Linear Algebra

Assume $U$, $V$, and $W$ to be vector spaces.
Assume vector spaces to be over the field $\mathbb{K}$ ($\mathbb{R}$ or $\mathbb{C}$).
Assume $+$ over functions to be pointwise addition.

3.1 easy stuff

Linear Independence
A subset $\{v_1, \dots, v_n\} \subseteq V$ is linearly independent if
$\sum_i \lambda_i v_i = 0 \implies \lambda_1 = \dots = \lambda_n = 0.$

Alternatively, the map $(\lambda_1, \dots, \lambda_n) \mapsto \sum_i \lambda_i v_i$ is injective.

Linear
A map $f : V \to W$ is linear iff.
additivity $f(u + v) = f(u) + f(v)$ and
homogeneity $f(\lambda v) = \lambda f(v)$
are preserved (for arbitrary $u, v \in V$ and $\lambda \in \mathbb{K}$).

Additivity and homogeneity are commonly combined into $f(\lambda u + \mu v) = \lambda f(u) + \mu f(v)$.

Linear Map

A homomorphism (see below) of a vector space.

Basis

(…of a vector space $V$) is a subset that is linearly independent and spans $V$.

Direct Sum

A structure-propagating Cartesian product with operations defined element-wise. Notated with “oplus” $\oplus$.

The above is more specifically an external direct sum; an internal direct sum conditionally exists for substructures (of a common abelian-group-like structure) iff. their intersection is trivial;
For a structure $G$ with substructures $U$ and $W$,
$U \oplus W$ is well-defined (“$U$ and $W$ are in direct sum”) iff. $U \cap W = \{0\}$.

Decomposition Statements

The expression $V = U \oplus W$ asserts that $U$ and $W$ are in direct sum and that their sum is exactly $V$.

Note: with internal direct sums, the notation has overloaded meanings: $U \oplus W$ denotes the same sum set as $U + W$ while simultaneously asserting that $U \cap W = \{0\}$.

Alternative definitions of internal direct sum:
Substructures $U$ and $W$ of a common $G$ are in direct sum iff. every element of $U + W$ can be written in exactly one way as the sum of some $u \in U$ and $w \in W$. I.e.,
$\forall v \in U + W,\ \exists!\,(u, w) \in U \times W,\ v = u + w.$

Again, $G = U \oplus W$ if also $U + W = G$. A more general version, assuming substructures $U_1, \dots, U_n$:
$U_1 + \dots + U_n \cong U_1 \oplus \dots \oplus U_n.$

Note that the second $\oplus$ denotes the external direct sum.
The generalized intersection criterion: $U_i \cap \sum_{j \neq i} U_j = \{0\}$ for all $i$.

Note: the direct product is a structure-propagating Cartesian product;
the external direct sum is a direct product with the restriction that the resulting tuples may only have finitely many non-zero elements (which matters if there are infinitely many operands).

If $B_U$ is a basis of $U$ and $B_W$ is a basis of $W$, then $U + W = U \oplus W$ iff. $B_U \cup B_W$ is linearly independent. ($U$ and $W$ are in the same ambient space.)
The internal direct sum is associative; the external direct sum is associative up to isomorphism.

Composition
For functions $f : X \to Y$ and $g : Y \to Z$,
composition $\circ$ is a binary operator where $(g \circ f)(x) = g(f(x))$.

Composition verifies associativity (for arbitrary composable functions $f, g, h$): $h \circ (g \circ f) = (h \circ g) \circ f$.

$(\mathcal{L}(V, W), +)$ is an abelian group;
$(\mathcal{L}(V), +, \circ)$ is a ring.
$\mathcal{L}(V) = \operatorname{End}(V)$; see below section on Endomorphisms.

Note that, for linear maps (which give distributivity on both sides), composition has the same properties as matrix multiplication.

Dimension
The dimension of a vector space $V$, notated $\dim V$, is the cardinality of (any) basis of $V$.
Note that the Steinitz exchange lemma guarantees any two bases of $V$ have the same cardinality.

For a function $f : X \to Y$:

Injectivity
$f(a) = f(b) \implies a = b$; i.e., distinct inputs have distinct images.
Surjectivity
$\forall y \in Y,\ \exists x \in X,\ f(x) = y$; i.e., $\operatorname{im} f = Y$.
Bijectivity
Injectivity and Surjectivity.

3.2 Linear Maps

Assume $f, g$ to be linear maps, and by default $f : V \to W$.

Image
$\operatorname{im} f = f(V) = \{ f(v) : v \in V \}$
Kernel
$\ker f = \{ v \in V : f(v) = 0 \}$
Rank
$\operatorname{rank} f = \dim \operatorname{im} f$
Rank-Nullity Theorem
$\dim V = \dim \ker f + \operatorname{rank} f$

Note that Rank is only defined for images of finite dimension, and
the Rank-Nullity Theorem assumes $V$ to have finite dimension too.
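A numerical sketch of the Rank-Nullity Theorem (Python/NumPy; the random matrix and the $10^{-10}$ tolerance are arbitrary choices):

```python
import numpy as np

# Sketch: verifying dim V = dim ker f + rank f for a map f : R^6 -> R^4
# represented by a random integer matrix A.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)

rank = np.linalg.matrix_rank(A)
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - np.sum(s > 1e-10)   # dim ker = #columns - #nonzero sigmas
print(f"rank = {rank}, nullity = {nullity}, sum = {rank + nullity}, dim V = {A.shape[1]}")
```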

Kernel criterion for Injectivity
$f$ is injective iff. $\ker f = \{0\}$.

Clearly, injectivity implies $\dim V \leq \dim W$. (Can be proved using the Rank-Nullity Theorem.)

Matrix Column Rank criterion for Injectivity
$f$ is injective iff., for a matrix $A$ representing $f$, the columns of $A$ are linearly independent.

Clearly, surjectivity implies $\dim V \geq \dim W$. (Can be proved using the Rank-Nullity Theorem.)

Note that any linear map can be made surjective trivially by setting its codomain ($W$) to its image.

Isomorphism
Both-way-structure-preserving bijections.

Examples of structure properties to preserve: linearity, group operation.
The structure of vector spaces is linearity, and a bijective linear map's inverse is automatically linear, so all bijective linear maps are isomorphisms.

Note: The concept of structure-preserving maps comes from category theory, where each category explicitly defines its such maps, called morphisms; vector spaces’ morphisms are linear maps by definition, so bijective linear maps are isomorphisms :shrug:.

Clearly, bijectivity implies $\dim V = \dim W$.
Intuitive way to express bijectivity: $f$ has a two-sided inverse $f^{-1}$.

Two structures are isomorphic iff. there exists an isomorphism between them.
For finite-dimensional $V$ and $W$ over the same field, $\dim V = \dim W$ iff. they are isomorphic.

Matrix representations of a bijection must be square.
A matrix is invertible iff. it represents a bijection.

3.3 Endomorphisms

Endomorphism
A map $f : V \to V$; same domain and codomain.
Automorphism
An endomorphism that is also an isomorphism.

For endomorphisms, injectivity, surjectivity, and bijectivity are equivalent (in finite dimension).

$f$ is bijective iff. it maps a basis to a basis.

3.3.1 Change of Basis

Consider an endomorphism $f$ of $V$ represented by matrix $A$ in the basis $\mathcal{B}$.

$A' = P^{-1} A P$ is the matrix representing $f$ in a new basis $\mathcal{B}'$, where $P$ is a matrix whose columns are the coordinate vectors of $\mathcal{B}'$ in $\mathcal{B}$.

Transition Matrix
(aka. change-of-basis matrix) the matrix $P$ above.

Observe that if $A' = P^{-1} A P$, then $A = P A' P^{-1}$.
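A minimal NumPy sketch of $A' = P^{-1} A P$ (the matrices are arbitrary examples):

```python
import numpy as np

# Sketch: change of basis. Columns of P are the new basis vectors
# expressed in the old (standard) basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                 # f in the standard basis
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # new basis: (1,0) and (1,1)
P_inv = np.linalg.inv(P)

A_prime = P_inv @ A @ P                    # f in the new basis
print(A_prime)                             # diagonal here: (1,1) is an eigenvector

# Same action, different coordinates: translate, apply f, translate back.
v_new = np.array([1.0, 2.0])               # coordinates in the new basis
print(np.allclose(A_prime @ v_new, P_inv @ (A @ (P @ v_new))))   # True
```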

4 Matrix Reduction

Assume $V$ denotes some finite-dimensional vector space over $\mathbb{K}$ ($\mathbb{R}$ or $\mathbb{C}$, as specified).
Assume $f$ denotes some endomorphism of $V$.
Assume $A$ is the matrix representation of any such $f$.

Observe that as $f$ is an endomorphism and $\dim V = n$ is finite, we can always represent it as some square ($n \times n$) matrix.

4.1 Abstract ahh Stuff

Invariant Subspace
(aka. stable subspace) $U \subseteq V$ is an invariant subspace under $f$ if $f(U) \subseteq U$.
I.e., $\forall u \in U,\ f(u) \in U$.

Observe that $\{0\}$ and $V$ are always invariant subspaces under any $f$;
$\ker f$ and $\operatorname{im} f$ are always invariant subspaces under a given $f$.
Note: it is also written “$U$ is $f$-invariant” for some subspace $U$ and transformation $f$.

Highly relevant to the concepts below: the COMP0147 notes on cosets and quotient groups.

Assume $U \subseteq V$ a subspace.

Coset

For any $v \in V$, the coset $v + U$ (or $[v]$ in equivalence class notation) is the set $\{ v + u : u \in U \}$.

Quotient Space

A quotient (vector) space $V / U$ is a new vector space over the set of all cosets of $U$.
I.e., $V / U = \{ v + U : v \in V \}$.
Vector operations are defined as $(v + U) + (w + U) = (v + w) + U$ and $\lambda (v + U) = (\lambda v) + U$.
Observe that $\dim(V / U) = \dim V - \dim U$.

Note: there are no conditions on $U$; one can form a quotient vector space from any subspace. (Due to addition being abelian and scalar multiplication being external.) This is contrary to general groups or rings, which require specific properties (normal subgroups or ideals) of the substructure.

Complementary Subspace
A subspace $W$ is complementary to $U$ iff. $V = U \oplus W$.
Construction from a Quotient Space:
Pick a basis $(v_1 + U, \dots, v_k + U)$ for $V / U$.
Pick a representative $v_i$ for every coset in the basis.
$W = \operatorname{span}(v_1, \dots, v_k)$ is a subspace complementary to $U$.

Complements to a subspace are generally non-unique, and a complementary subspace can be viewed geometrically as a section of the corresponding quotient space. However, the addition of an inner product (to define orthogonality) allows the construction of a unique, canonical complement $U^{\perp}$.
(See orthogonal projections and exact sequences.)

Observe that $W \cong V / U$ for any complement $W$. Specifically, the $W$-restriction of the projection map $\pi : V \to V/U$ is an isomorphism.

The construction above produces a linear map $s : V/U \to V$ with $\pi \circ s = \mathrm{id}_{V/U}$, called a section (or splitting) of the projection $\pi$. Its image is the complement $W$. Given a choice of $s$, the map provides an explicit isomorphism $V/U \cong W$. Because this isomorphism relies on the arbitrary choice of representatives to build $s$, there is no canonical isomorphism between $V/U$ and $W$.
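A NumPy sketch of the canonical complement: for an arbitrary example $U \subseteq \mathbb{R}^3$ (with the standard inner product), compute $U^{\perp}$ as the null space of $B^{\top}$ via the SVD, and check $V = U \oplus U^{\perp}$ by dimensions:

```python
import numpy as np

# Sketch: U = column span of B inside V = R^3; U-perp = ker(B^T).
B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                 # dim U = 2

_, s, Vt = np.linalg.svd(B.T)              # B.T : R^3 -> R^2
rank = np.sum(s > 1e-10)
W = Vt[rank:].T                            # orthonormal basis of U-perp (columns)
print("dim U =", rank, " dim U-perp =", W.shape[1])
# Direct sum check: the combined basis spans all of R^3.
print("dim(U + U-perp) =", np.linalg.matrix_rank(np.hstack([B, W])))   # 3
```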

4.2 Block Matrices

For an $f$-invariant $U$, one can construct, in a deliberate basis, a block triangular matrix.
Let $W$ be a subspace complementary to $U$.

Pick a basis $(u_1, \dots, u_k)$ for $U$ and a basis $(w_1, \dots, w_m)$ for $W$.
$\mathcal{B} = (u_1, \dots, u_k, w_1, \dots, w_m)$ is then a basis for $V$. (Why? …because I said so.)

Now, we construct a matrix for $f$ in $\mathcal{B}$.
Recall that a matrix can be constructed column-wise, column $j$ holding $f(b_j)$’s coordinates (in $\mathcal{B}$):
$M = \begin{pmatrix} A & B \\ 0 & C \end{pmatrix}$

…where $A$ is a $k \times k$ block; $0$ is an $m \times k$ block of zeroes.

Observe:
for each $u_j$, $f(u_j) \in U$, so its coordinates on the $w$-part vanish, which is exactly what forces the lower-left block to be $0$.

Observe that $B$ will be zeroes if $W$ is also invariant, yielding a block diagonal matrix.

Usefully, $\det M = \det A \cdot \det C$ and $\chi_M = \chi_A \cdot \chi_C$.
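A NumPy sketch checking $\det M = \det A \cdot \det C$ and the eigenvalue split for an arbitrary example block triangular matrix:

```python
import numpy as np

# Sketch: block upper triangular M = [[A, B], [0, C]].
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = np.array([[2.0, 0.0], [1.0, 3.0]])
Z = np.zeros((2, 2))

M = np.block([[A, B], [Z, C]])
print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(C)))  # True
# Eigenvalues of M = eigenvalues of A together with those of C:
print(np.sort(np.linalg.eigvals(M)))
print(np.sort(np.concatenate([np.linalg.eigvals(A), np.linalg.eigvals(C)])))
```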

4.3 Eigenvalues and Eigenvectors

Eigenvector

A nonzero vector $v \in V$ is an eigenvector of some $f$ iff. there exists a scalar $\lambda$ where $f(v) = \lambda v$.

Eigenvalue

The scalar $\lambda$ above is the eigenvalue of $f$ corresponding to the eigenvector $v$.

Other equations characterizing eigenvalues and eigenvectors wrt. matrix representations include $A v = \lambda v$ and $(A - \lambda I) v = 0$.

(See section below on the computational utility of these.)

An eigenvector’s span is an invariant subspace where the associated endomorphism acts like a scaling:
An eigenvector spans a 1-dimensional invariant subspace, and a linear transformation on 1-dimensional invariant subspaces can only be a scaling.

Eigenspace
For a given eigenvalue $\lambda$, we have its eigenspace $E_\lambda = \ker(f - \lambda\,\mathrm{id}) = \{ v \in V : f(v) = \lambda v \}$.
Equivalently, $E_\lambda = \operatorname{span}(v_1, \dots, v_g)$ where $(v_1, \dots, v_g)$ is a basis of eigenvectors for $\lambda$ (and $g$ is the geometric multiplicity).

Observe that any nonzero $v \in E_\lambda$ is an eigenvector corresponding to $\lambda$.
Observe that $\{0\} \subseteq E_\lambda$. (Unequal in nontrivial cases, i.e., exactly when $\lambda$ is an eigenvalue.)

Geometric Multiplicity

The geometric multiplicity of an eigenvalue $\lambda$ of $f$, $m_g(\lambda)$, is $\dim E_\lambda$.
This is the “physical” dimension of the associated eigenspace.

Algebraic Multiplicity

The algebraic multiplicity of an eigenvalue $\lambda$ of $f$, $m_a(\lambda)$, is its multiplicity as a root of $\chi_f$ ($f$’s characteristic polynomial; see below).

Linear Independence of Eigenvectors

For distinct eigenvalues $\lambda_1, \dots, \lambda_k$, their corresponding eigenvectors $v_1, \dots, v_k$ are linearly independent.
Alternatively: Take a basis for each eigenspace; the union of all these bases, across distinct eigenvalues, is linearly independent.
This allows the construction of bases out of eigenvectors if we have enough eigenvalues.

4.3.1 Computing Eigenvalues

Assume $\dim V = n$ and $A \in \mathbb{K}^{n \times n}$.

Derivation from $A v = \lambda v$ to the characteristic polynomial:

Observe that a given $\lambda$ is a solution
$\iff \ker(A - \lambda I)$ is non-trivial
$\iff A - \lambda I$ is not injective $\iff$ not surjective $\iff$ not bijective
$\iff A - \lambda I$ is not invertible
$\iff \det(A - \lambda I) = 0$.
I.e., any $\lambda$ that satisfies any statement above is an eigenvalue, and the set of all such solutions is all eigenvalues of the given transformation $f$.

Characteristic Polynomial

The characteristic polynomial for matrix $A$ is
$\chi_A(\lambda) = \det(A - \lambda I).$

The roots of $\chi_A$ (expanding the determinant with the Leibniz formula) are exactly the eigenvalues of $A$.

Aside: there’s a very subtle (and mostly insignificant) difference between $\det(A - \lambda I)$ and $\det(\lambda I - A)$. I refuse to elaborate further.

Observe that $A$ cannot have more than $n$ linearly independent eigenvectors, nor more than $n$ distinct eigenvalues.
Observe that $\chi_A$ has degree $n$.
$0$ is a root iff. $A$ is non-invertible.

4.3.2 Computing Eigenvectors

Obviously, we just compute $\ker(A - \lambda I)$ for a given $\lambda$ to obtain its eigenspace.
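A NumPy sketch (with an arbitrary symmetric example matrix): eigenpairs via np.linalg.eig, then the eigenspace of one $\lambda$ as $\ker(A - \lambda I)$ read off the SVD’s zero singular values:

```python
import numpy as np

# Sketch: eigenpairs, then the eigenspace of lambda = 3 as ker(A - lambda I).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, vecs = np.linalg.eig(A)
print("eigenvalues:", lams)                          # 3 and 1
for lam, v in zip(lams, vecs.T):
    print(np.allclose(A @ v, lam * v))               # A v = lambda v: True

lam = 3.0
_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
E = Vt[np.sum(s > 1e-10):].T                         # basis of ker(A - lam I)
print("eigenspace basis for lambda=3:\n", E)         # spans (1, 1)/sqrt(2)
```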

4.4 Polynomials and Linear Maps

The characteristic polynomial computes eigenvalues.
The roots and their multiplicities provide information on possible diagonalization.

Complex vs Real fields


Per the fundamental theorem of algebra, $\chi_A$ always has $n$ roots in $\mathbb{C}$ (with multiplicity), but not all of them need be real.
For $\mathbb{K} = \mathbb{R}$, some matrix $A$ may have fewer than $n$ eigenvalues.
For $\mathbb{K} = \mathbb{C}$, $A$ has exactly $n$ eigenvalues (counted with algebraic multiplicity).

Cayley-Hamilton Theorem

$\chi_A(A) = 0$ for any square $A$.
Effectively, this means $\{I, A, A^2, \dots, A^n\}$ is linearly dependent, and $\chi_A$ itself demonstrates the exact dependence (coefficients). E.g., one can express $A^n$ (and $A^{-1}$, when it exists) as a linear combination of lower powers of $A$; see the sketch below.
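A NumPy sketch of Cayley-Hamilton on an arbitrary $2 \times 2$ example (np.poly returns the coefficients of $\det(\lambda I - A)$, highest power first):

```python
import numpy as np

# Sketch: chi_A(A) = 0, and rearranging it to express A^2 in lower powers.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

coeffs = np.poly(A)                       # [1, -5, -2]: lambda^2 - 5 lambda - 2
chi_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - i)
               for i, c in enumerate(coeffs))
print(np.allclose(chi_of_A, np.zeros_like(A)))        # True
# Rearranged: A^2 = 5 A + 2 I.
print(np.allclose(A @ A, 5 * A + 2 * np.eye(2)))      # True
```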

If $\mathbb{K} = \mathbb{R}$, $\chi_A$ factors, according to how, well, polynomials factor in $\mathbb{R}[X]$, into some distinct irreducible quadratic terms and linear terms. For distinct real eigenvalues $\lambda_i$ (with irreducible quadratics $q_j$):
$\chi_A(X) = (-1)^n \prod_i (X - \lambda_i)^{a_i} \cdot \prod_j q_j(X)^{b_j}.$

If $\mathbb{K} = \mathbb{C}$, the polynomial can be fully factored, for distinct eigenvalues $\lambda_i$:
$\chi_A(X) = (-1)^n \prod_i (X - \lambda_i)^{a_i}.$

Kernel Decomposition Theorem


If $p = p_1 \cdots p_k$ annihilates $f$ (i.e., $p(f) = 0$) with the $p_i$ pairwise coprime, then $V = \ker p_1(f) \oplus \dots \oplus \ker p_k(f)$. Applied with $p = \chi_f$ (Cayley-Hamilton):

For $\mathbb{K} = \mathbb{R}$:
$V = \bigoplus_i \ker (f - \lambda_i\,\mathrm{id})^{a_i} \oplus \bigoplus_j \ker q_j(f)^{b_j}$

For $\mathbb{K} = \mathbb{C}$:
$V = \bigoplus_i \ker (f - \lambda_i\,\mathrm{id})^{a_i}$

The $\ker (f - \lambda_i\,\mathrm{id})^{a_i}$ are generalized eigenspaces…
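A NumPy sketch with a $2 \times 2$ Jordan block (an arbitrary example): the plain eigenspace has dimension 1, but the generalized eigenspace $\ker(A - \lambda I)^2$ fills out all of $V$:

```python
import numpy as np

def null_dim(M, tol=1e-10):
    """Dimension of ker(M), counted via near-zero singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[1] - np.sum(s > tol)

# Jordan block: single eigenvalue lambda = 2, algebraic multiplicity 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
N = A - 2.0 * np.eye(2)

print(null_dim(N))       # 1: geometric multiplicity (plain eigenspace)
print(null_dim(N @ N))   # 2: the generalized eigenspace is all of R^2
```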

4.5 Matrix Reduction

Matrix reduction is finding (invariant) subspaces on which the associated transformation acts simply.

5 Differential Calculus

6 Euclidean Spaces

7 Numerical Methods

8 Probabilities

PDF:
$f_X \geq 0$, with $P(a \leq X \leq b) = \int_a^b f_X(x)\,dx$.

Evidently, $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$.

Cumulative DF:
$F_X(x) = P(X \leq x) = \int_{-\infty}^{x} f_X(t)\,dt,$

also written as $F(x)$.

$P(X \leq x,\ Y \leq y) = P(X \leq x)\,P(Y \leq y)$ if $X$ and $Y$ are independent.
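A NumPy sketch checking the PDF/CDF facts for the exponential distribution (an arbitrary choice; $\lambda = 1.5$, with the integral truncated at $x = 30$):

```python
import numpy as np

# Sketch: f(x) = lam * exp(-lam x) on [0, inf): the PDF integrates to 1,
# and the CDF F(x) = 1 - exp(-lam x) is the running integral of the PDF.
lam = 1.5
x = np.linspace(0.0, 30.0, 300001)
f = lam * np.exp(-lam * x)

# Cumulative trapezoid rule: F_numeric[i] ~= integral of f from 0 to x[i].
F_numeric = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))])
F_exact = 1.0 - np.exp(-lam * x)
print(f"integral of PDF over [0, 30] = {F_numeric[-1]:.6f}")          # ~= 1
print(f"max |F_numeric - F_exact| = {np.max(np.abs(F_numeric - F_exact)):.2e}")
```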

8.1 classic distros

8.1.1 Exponential

8.1.2 Uniform

8.1.3 Normal

8.1.4 what

and whatever Student’s t-distribution and Chi-squared distribution are.

8.2 Multivariate

…joint probability distributions…

Marginal distributions are the single-variable distributions obtained by integrating out all other variables. (Note: a “slice” of a 2-d distribution at fixed values of the other variable is a conditional distribution, up to normalization, not a marginal.)

Law of Large Numbers:
For a sequence $(X_i)$ of independent and identically distributed (i.i.d.) random variables, the sample mean
$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \longrightarrow E[X_1] \quad \text{as } n \to \infty.$
9 Statistics