Vectors
Important definitions
- Span of \(U\): the set of all linear combinations of vectors in \(U\). The span describes all points reachable by a linear combination of the vectors in \(U\).
- Subspace of \(\mathbf{R}^{n}\): a non-empty subset \(S\) of \(\mathbf{R}^{n}\) that is closed under addition and scalar multiplication. Equivalently, it is a vector space that is a subset of a larger vector space. Remember that a subspace is the span of any set of vectors that generates it.
- The intersection of two subspaces is the set of all vectors contained in both; it is itself a subspace.
- Every subspace of \(\mathbf{R}^{n}\) contains the zero vector
- If \(U\) is a non-empty, finite subset of \(\mathbf{R}^{n}\) then the span of \(U\) is a subspace of \(\mathbf{R}^{n}\), called the subspace spanned or generated by \(U\). In other words, knowing which vectors are in \(U\) defines the subspace: it is everything reachable by spanning those vectors.
- Basis: if a set of vectors in a subspace \(S\) is linearly independent and spans \(S\), that set is a basis of \(S\).
- A basis cannot contain more vectors than the dimension of the space they span. It is not possible to have three vectors in a basis for \(\mathbf{R}^{2}\); with that many vectors, at least one will be a linear combination of the ones before it, so the set is not a basis.
Questions
- Express the vector \(a\) as a linear combination of vectors \((b_{1}, b_{2}, ..., b_{n})\).
- You want to solve the system of equations \(a=\alpha b_{1}+\beta b_{2} + ... + \gamma b_{n}\) for the unknown coefficients.
- Determine if the set of vectors is linearly dependent.
- If the set contains the zero vector then the set is automatically linearly dependent.
- Any singleton set is linearly independent, unless it contains the zero vector
- Check whether any vector in the set is a linear combination of the previous (linearly independent) vectors in the set.
- Determine if a set of vectors is linearly independent (matrix method).
- Populate a matrix \(M\) where each column of \(M\) is one of the vectors from the set.
- If the matrix is square and its determinant is non-zero, the vectors are linearly independent; \(n\) such vectors in \(\mathbf{R}^{n}\) therefore form a basis.
- Removing a vector that is linearly dependent on its predecessors.
- Try to represent this vector as a linear combination of its predecessors.
- If the only solution is \(\alpha=\beta=\gamma= ... =0\), the vector is linearly independent of its predecessors; if a non-trivial solution exists, it is dependent and can be removed.
- Show that the set \(S\) is a subspace of \(\mathbf{R}^{n}\). Find a basis for, and the dimension of, \(S\).
- Show that the subspace contains the zero vector and is closed under addition and scalar multiplication.
- Show which vectors in the set \(S\) are linearly independent; these form a basis for \(S\), and their count is the dimension of \(S\).
- Form a basis for \(\mathbf{R}^n\) from the set of vectors \(S\).
- Iterate through \(S\) and remove any linearly dependent vectors as you go along; you may end up with different bases for the same subspace.
- Find the coordinates of a vector \(v\) with respect to basis \(U\) and basis \(W\).
- The coordinates of \(v\) for any basis \(A\) is simply the coefficients required to represent \(v\) as a linear combination of the basis vectors \(a \in A\).
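The coordinate-finding step above can be sketched in pure Python; this is a minimal 2-D illustration (the basis and vector are made up) that solves \(\alpha u_{1} + \beta u_{2} = v\) by Cramer's rule:

```python
# Find coordinates (alpha, beta) of v with respect to basis U = {u1, u2},
# i.e. solve alpha*u1 + beta*u2 = v by Cramer's rule (2x2 case).
def coords_2d(u1, u2, v):
    det = u1[0] * u2[1] - u2[0] * u1[1]          # determinant of [u1 u2]
    if det == 0:
        raise ValueError("u1, u2 do not form a basis")
    alpha = (v[0] * u2[1] - u2[0] * v[1]) / det  # first column replaced by v
    beta = (u1[0] * v[1] - v[0] * u1[1]) / det   # second column replaced by v
    return alpha, beta

# Example basis and vector (chosen arbitrarily for illustration):
alpha, beta = coords_2d((1, 1), (1, -1), (3, 1))
# Check by hand: 2*(1,1) + 1*(1,-1) = (3,1)
print(alpha, beta)  # 2.0 1.0
```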
Matrices
Basic arithmetic
- Multiply the two matrices \(M\) and \(N\)
- Two matrices are only multipliable if the number of columns on \(M\) is the same as the number of rows on \(N\).
- Visualise this by taking a single row of \(M\); rotate it 90 degrees; can you fit this inside the matrix \(N\)?
- If this is not possible then you cannot multiply the two matrices, as you will not have enough values for the operation.
- Sum the products of each element as you iteratively advance the row of \(M\) and the column of \(N\).
- The row index in \(M\) and the column index in \(N\) determine where to put this sum-value in the new matrix. This can be seen as ‘run-and-dive’.
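The run-and-dive procedure can be sketched in a few lines of Python (the example matrices are made up for illustration):

```python
# Multiply M (p x q) by N (q x r): entry (i, j) of the product is the sum of
# products as we advance along row i of M and down column j of N.
def matmul(M, N):
    if len(M[0]) != len(N):
        raise ValueError("columns of M must match rows of N")
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))]
            for i in range(len(M))]

M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]
print(matmul(M, N))  # [[19, 22], [43, 50]]
```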
Various Operations
Take the following 2x2 matrix: \(A= \begin{bmatrix} a & b\\ c & d \end{bmatrix}\)
- Determinant: \(ad-bc\)
- Inverse: \(A^{-1}=\frac{1}{det(A)} \begin{bmatrix} d & -b\\ -c & a \end{bmatrix}\)
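A quick Python check of these two formulas (the matrix values are made up; `Fraction` keeps the inverse exact):

```python
# Determinant and inverse of a 2x2 matrix A = [[a, b], [c, d]].
from fractions import Fraction

def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

def inv2(A):
    (a, b), (c, d) = A
    det = Fraction(det2(A))
    if det == 0:
        raise ValueError("matrix is singular")
    # swap a and d, negate b and c, divide everything by the determinant
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4, 7], [2, 6]]
print(det2(A))   # 10
Ainv = inv2(A)   # entries are Fractions: [[3/5, -7/10], [-1/5, 2/5]]
```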
Take the following \(3\times3\) matrix: \(B= \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix}\)
- Determinant: expand along the first row, striking through the row and column of each element: \(det(B)=a(ei-fh)-b(di-fg)+c(dh-eg)\).
- Matrix of minors: replace each element \(a..i\) with its minor, i.e. the determinant of the \(2\times2\) matrix left after striking through that element’s row and column (just as when expanding the determinant along that row).
- Matrix of cofactors: multiply each minor by the sign \((-1)^{i+j}\) for its row \(i\) and column \(j\); this gives the alternating checkerboard pattern \([+, -, +]\) on the first row, \([-, +, -]\) on the second, and so on.
- Adjoint matrix: this is the transpose of the matrix of cofactors.
- Inverse:
- \[B^{-1}=\frac{1}{det(B)}adj(B)\]
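The whole minors → cofactors → adjoint → inverse pipeline can be sketched in Python (the matrix is made up for illustration):

```python
# Inverse of a 3x3 matrix via minors, cofactors, adjoint: B^{-1} = adj(B)/det(B).
from fractions import Fraction

def minor(B, i, j):
    # determinant of the 2x2 matrix left after striking row i and column j
    rows = [r for k, r in enumerate(B) if k != i]
    m = [[x for l, x in enumerate(r) if l != j] for r in rows]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv3(B):
    cof = [[(-1) ** (i + j) * minor(B, i, j) for j in range(3)]
           for i in range(3)]                              # matrix of cofactors
    det = sum(B[0][j] * cof[0][j] for j in range(3))       # expand along row 1
    if det == 0:
        raise ValueError("matrix is singular")
    # adjoint = transpose of the cofactor matrix; divide each entry by det
    return [[Fraction(cof[j][i], det) for j in range(3)] for i in range(3)]

B = [[1, 2, 0], [0, 1, 0], [0, 0, 2]]
Binv = inv3(B)   # [[1, -2, 0], [0, 1, 0], [0, 0, 1/2]] as Fractions
```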
Elementary Row Operations
If the matrix \(B\) is obtained from the matrix \(A\) by one of the following operations, the determinant changes as follows:
- Scalar multiplication of a row by \(\lambda\); \(|B|=\lambda |A|\).
- Interchanging of two rows; \(|B|=-|A|\).
- Adding a multiple of one row to another; the determinant does not change.
Linear Transformations
- Verify that \(\mathbf{T}\) is a linear transformation.
- Check that \(\mathbf{T}(0)=0\).
- Check that \(\mathbf{T}(a+b)=\mathbf{T}(a)+\mathbf{T}(b)\).
- Check that \(\mathbf{T}(\lambda a)=\lambda\mathbf{T}(a)\)
- For the bases \(V, W\) and the linear transformation \(\mathbf{T}\), and a vector \(a\) with respect to \(V\), find the coordinates of \(\mathbf{T}(a)\) with respect to \(W\).
- Find the values of \(\mathbf{T}(v), \forall v\in V\). Represent these as linear combinations of the basis vectors of \(W\).
- For each of the vector results from the above, construct a matrix of \(\mathbf{T}\) by using each as a column vector for the matrix \(M\).
- \(M \cdot (a) = A\), where \((a)\) is the column vector of coordinates of \(a\) in \(V\) and \(A\) is the column vector of coordinates for your answer in \(W\).
- This works because we can represent \(\mathbf{T}(a)\) as \(M \cdot (a)\), where \(M\) is the matrix that represents the linear transformation with respect to these bases.
- Change the coordinates of vector \(a\) from basis \(V\) to basis \(W\).
- You can use a transition matrix \(M\) to get from coordinates of \(a\) in \(V\) to coordinates of \(a\) in \(W\).
- The transition matrix \(M\) is made up of the column vectors of the basis vectors of \(V\) once they have been converted to the basis of \(W\).
- Once you have this, you can find the coordinates of \(a\) in terms of \(W\) by the following expression: \(\begin{bmatrix} a_{1}^{W} \\ a_{2}^{W} \\ \vdots \\ a_{n}^{W} \end{bmatrix} = M \begin{bmatrix} a_{1}^{V} \\ a_{2}^{V} \\ \vdots \\ a_{n}^{V} \end{bmatrix}\)
Eigenvalues and Eigenvectors
An eigenvector of a matrix, when multiplied by that matrix, does not change direction; it is only scaled by the corresponding eigenvalue.
- Find the eigenvalues of the matrix \(A\).
- Remember that \(A-\lambda I\) is the matrix we are focusing on.
- We solve for \(\lambda\) by solving \(|A-\lambda I|=0\). Each solution is an eigenvalue of \(A\).
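For a \(2\times2\) matrix, \(|A-\lambda I|=0\) expands to \(\lambda^{2}-\mathrm{tr}(A)\lambda+det(A)=0\), which a short Python sketch can solve (the matrix is made up for illustration; real eigenvalues are assumed):

```python
# Eigenvalues of a 2x2 matrix A: solve det(A - lambda*I) = 0, i.e.
# lambda^2 - trace(A)*lambda + det(A) = 0, via the quadratic formula.
import math

def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det           # discriminant; assumed >= 0 here
    root = math.sqrt(disc)
    return sorted([(tr - root) / 2, (tr + root) / 2])

print(eigenvalues_2x2([[2, 1], [1, 2]]))  # [1.0, 3.0]
```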
- Find the eigenvectors of the matrix \(A\).
- Find the eigenvalues for \(A\) with the above method.
- For each eigenvalue of \(A\):
- Substitute the value of \(\lambda\) into the matrix \(A-\lambda I\).
- Solve the equation \((A-\lambda_{n}I) \begin{bmatrix} x \\ y \\ z \end{bmatrix}= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\)
- You will find a vector of the form \((x,y,z) = (a\gamma, b\gamma, \gamma)\). Set \(\gamma=1\) and then form a column vector using the right-hand side; make sure the resulting eigenvector is non-zero.
- This method (finding the kernel) is faster than solving the simultaneous equations directly.
- (Diagonalization) Find a diagonal matrix \(D\) and an invertible matrix \(S\) such that \(S^{-1}AS=D\), where \(A\) is a provided matrix.
- Find the eigenvalues and corresponding eigenvectors of \(A\).
- Each diagonal element of \(D\) is an eigenvalue; each column vector of \(S\) is a corresponding eigenvector.
- This yields the result that \(A=SDS^{-1}\).
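A quick Python sanity check of \(S^{-1}AS=D\) on a made-up \(2\times2\) example (eigenvalues 1 and 3, with the eigenvectors as columns of \(S\)):

```python
# Diagonalization check for A = [[2, 1], [1, 2]]: eigenvalues 1 and 3 with
# eigenvectors (1, -1) and (1, 1). Verify that S^{-1} A S = D.
from fractions import Fraction

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[2, 1], [1, 2]]
S = [[1, 1], [-1, 1]]                  # columns are the eigenvectors
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[Fraction(S[1][1], det), Fraction(-S[0][1], det)],
        [Fraction(-S[1][0], det), Fraction(S[0][0], det)]]

D = matmul(Sinv, matmul(A, S))
print(D == [[1, 0], [0, 3]])  # True: diagonal of eigenvalues
```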
Sequences
- Does the sequence \(a(n)\) converge?
- To show divergence, find two subsequences that converge to different limits
- To show convergence, show that the sequence is monotone and bounded, verifying that the bound holds \(\forall n\)
- Show that if \(0 \leq r < 1\) then the sequence \(r^{n}\) converges. By considering the sequence \(r^{n+1}\), deduce that \(r^{n} \rightarrow 0\).
- Show that \(r^{n}\) is a decreasing sequence which is hence bounded (below)
- Show that \(r^{n+1}=rr^{n} \rightarrow rl\), where \(l\) is the limit of \(r^{n}\). Since \(r^{n+1}\) has the same limit \(l\), we get \(l=rl\), which for \(r < 1\) forces \(l=0\).
Recurrences
- Find a closed form for the terms of a sequence.
- Form an equation in relation to the terms of the sequence. For example, the Fibonacci sequence can be written as \(F_{n} - F_{n-1} - F_{n-2}=0\).
- Form and solve the auxiliary equation.
- if \(\lambda_{1} \neq \lambda_{2}\), \(x_{n}=A\lambda^{n}_{1} + B\lambda^{n}_{2}\)
- if \(\lambda_{1} = \lambda_{2} = \lambda\), \(x_{n}=A\lambda^{n} + Bn\lambda^{n}\)
- If you have the first terms of the sequence, use them to work out the constants \(A\) and \(B\).
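For the Fibonacci example above, the auxiliary equation \(\lambda^{2}-\lambda-1=0\) has distinct roots, and a short Python sketch confirms the resulting closed form:

```python
# Closed form for the Fibonacci recurrence F_n - F_{n-1} - F_{n-2} = 0:
# auxiliary equation t^2 - t - 1 = 0 has distinct roots phi and psi, so
# F_n = A*phi^n + B*psi^n; F_0 = 0, F_1 = 1 give A = 1/sqrt(5), B = -1/sqrt(5).
import math

def fib_closed(n):
    s = math.sqrt(5)
    phi, psi = (1 + s) / 2, (1 - s) / 2
    return round((phi ** n - psi ** n) / s)   # round kills float error

print([fib_closed(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```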
- Solving a non-homogeneous recurrence
- Use the above to solve the homogeneous form of the equation to find \(h(n)\)
- Then solve the particular solution to find \(p(n)\)
- For a form \(f(n)=an\), try \(x_{n}=Cn+D\). \(x_{n-1}=C(n-1)+D\)
- The solution of the recurrence is given by \(x(n)=h(n)+p(n)\)
Differentiation
- Show that a function \(f\) is differentiable
- Remember that \(f\) is differentiable if the limit \(\lim_{x\rightarrow a}\frac{f(x)-f(a)}{x-a}\), which is equivalent to \(\lim_{h\rightarrow 0}\frac{f(a+h)-f(a)}{h}\) exists.
- For a function such as \(f(x)=|x|\), check whether the left and right limits of the difference quotient agree; they must be equal for the function to be differentiable at that point.
- You may also need to use induction on the equation for a function
- Find the partial derivatives of \(f(x,y,z,...)\)
- Differentiate the expression with respect to the variable you are focusing on, treating all the other variables \(x,y,z...\) as constants.
- Find the maximum and minimum of a function \(f\) on the interval \([a,b]\) where \(f\) is given by \(f(x)\).
- Find the solutions to \(f'(x)=0\) and hence the stationary points of the function
- Compare the values for \(f(a), f(b), f(x_{1}), f(x_{2})\), and see which of these are the minimum or maximum values
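A minimal Python sketch of the compare-endpoints-and-stationary-points step (the function and interval are made up; the stationary point \(x=1\) comes from solving \(f'(x)=3x^{2}-3=0\) by hand):

```python
# Compare f at the endpoints of [a, b] and at the stationary points inside it.
def extremes(f, points):
    vals = [(f(x), x) for x in points]   # (value, location) pairs
    return min(vals), max(vals)

f = lambda x: x ** 3 - 3 * x
candidates = [0, 2, 1]                   # endpoints 0, 2 plus stationary x = 1
(fmin, xmin), (fmax, xmax) = extremes(f, candidates)
print(fmin, xmin, fmax, xmax)  # -2 1 2 2
```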
- Sketch the graph of function \(f\) given by \(f(x)\)
- Find the stationary points and the values of \(f\) at each of these.
- Find the nature of each stationary point to determine if they are minima or maxima
- Find the solutions of \(f(x)=0\), if this is possible
- Determine the behaviour of \(f(x)\) as \(x \rightarrow \pm \infty\)
- Investigate any possible asymptotes
- Find the limit of \(f(x)=\frac{g(x)}{h(x)}\) when both \(g(x)\) and \(h(x)\) tend to \(0\) (or both tend to \(\infty\)), so the quotient is indeterminate.
- Use L’Hopital’s Rule: if both functions \(g(x)\) and \(h(x)\) are differentiable, then \(\lim \frac{g(x)}{h(x)}=\lim \frac{g'(x)}{h'(x)}\). You can repeat this until you gain a value for the limit.
- Find \(\frac{dy}{dx}\) in terms of \(x\) and \(y\) (implicit differentiation).
- Implicit differentiation rules: for a term of the form \(g(x)h(y)\), \(\frac{d}{dx}\big(g(x)h(y)\big)=g'(x)h(y)+g(x)h'(y)\frac{dy}{dx}\).
- Essentially, differentiate \(g(x)h(y)\) using the product rule, applying the chain rule to any factor involving \(y\) so that each derivative of \(y\) picks up a \(\frac{dy}{dx}\). Then collect terms and solve for \(\frac{dy}{dx}\).
- Show that \(\frac{d}{dx}f(x)= ...\) where \(f(x)\) is usually a function involving indices
- Represent \(f(x)\) as a form of \(e^{a log b}\) (you will have to use some intuition here) and then differentiate the expression to achieve the expression on the right-hand side.
- Note that for \(a>0\), \(a^{x}=e^{x\log a}\). Note the rules for differentiating logarithms and differentiating the exponential function.
Taylor’s Theorem
Quick definitions
The Taylor series can be defined as \(\sum^{\infty}_{n=0}\frac{f^{(n)}{(a)}}{n!}(x-a)^{n}\) Since we use Taylor’s Theorem to calculate an approximation for a function, we will typically have a remainder left over, \(R_{n}(x)\) where \(R_{n}(x)=\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}\) for some number \(c\) between \(a\) and \(x\). The Taylor polynomial refers to the polynomial expression derived after \(n\) many expansions of the Taylor series.
When we pick \(a=0\) for the Taylor series, we arrive at the Maclaurin series, which is defined similarly: \(\sum^{\infty}_{n=0}\frac{f^{(n)}(0)}{n!}x^{n}\) We can still use the above expression for the remainder left by the approximation by substituting a value of \(0\) for \(a\). By the squeeze rule, we will typically find that \(R_{n}(x) \rightarrow 0\), which shows that the Maclaurin series expansion is valid for all \(x \in \mathbf{R}\). Work with the absolute value \(|R_{n}(x)|\).
Finding the estimate of the error
Using the above equations, once we have a polynomial from either the Maclaurin or Taylor series, we just need to pick our value for \(c\); this is always a value between \(x\) (the value we have been given) and \(a\) , which is set to 0 by definition for the Maclaurin series. You may need to do some rearranging to ensure that your value for \(c\) is correct.
To bound the error, take the largest possible value of \(|f^{(n+1)}(c)|\) for \(c\) between \(a\) and \(x\); this gives an upper bound on \(|R_{n}(x)|\). Use a simple over-estimate (e.g. \(e<3\)) rather than an arbitrary guess for \(c\).
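A Python sketch of this error estimate for the Maclaurin expansion of \(e^{x}\) at \(x=1\), using the over-estimate \(e^{c}\leq 3\) for \(c\in[0,1]\):

```python
# Maclaurin polynomial for e^x and the Lagrange remainder bound
# |R_n(x)| <= M * x^(n+1) / (n+1)!, where M bounds f^(n+1)(c) = e^c on [0, x].
# For x = 1 we can take M = 3 (since e < 3).
import math

def maclaurin_exp(x, n):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 5
approx = maclaurin_exp(x, n)
bound = 3 * x ** (n + 1) / math.factorial(n + 1)   # = 3/720
print(abs(math.e - approx) <= bound)  # True
```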
First Order ODEs
- Find the general solution of \(\frac{dy}{dx}=f(x)g(y)\).
- This is a separable equation and is solved by taking integrals.
- Get \(dy, g(y)\) onto one side of the equation and hence \(dx, f(x)\) onto the other side of the equation. You will typically have a fraction of the form \(\frac{1}{h(y)}\) and \(\frac{1}{k(x)}\).
- Once you have taken integrals of both sides you will end up with something like \(\log(h(y))=m(x)+c\); exponentiate both sides and then simplify the expression.
- You may choose to introduce a variable \(vx=y\) to help simplify an integral, as this will lead to a separable equation.
- Remember to substitute the value of \(v\) back into the expression.
- Solve the first order linear equation \(\frac{dy}{dx} + P(x)y = Q(x)\). FOCUS ON THIS!!!!!!!
- Find the integrating factor \(I(x)=e^{\int P(x)dx}\). Note that \(\frac{d}{dx}I(x)=I(x)P(x)\), so the left-hand side of the equation becomes \(\frac{d}{dx}\big(yI(x)\big)\).
- Multiply both sides of the equation by the integrating factor \(I(x)\).
- Continue to solve the equation as you would using the above steps.
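A worked example of the integrating-factor method (the equation \(\frac{dy}{dx}+y=x\) is chosen arbitrarily for illustration):

\[\begin{aligned}
&\frac{dy}{dx} + y = x \quad\Rightarrow\quad I(x) = e^{\int 1\,dx} = e^{x} \\
&\frac{d}{dx}\big(y e^{x}\big) = e^{x}\frac{dy}{dx} + e^{x} y = x e^{x} \\
&y e^{x} = \int x e^{x}\,dx = (x-1)e^{x} + C \quad\text{(integration by parts)} \\
&y = x - 1 + C e^{-x}
\end{aligned}\]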
Second Order ODEs
- Solve the homogeneous equation (find the general solution) of \(ay'' + by' + cy = 0\).
- Form the auxiliary equation \(a\lambda^{2} + b\lambda+c=0\) and solve for \(\lambda\).
- Real and distinct roots; the general solution is \(y=Ae^{\lambda_{1}x}+Be^{\lambda_{2}x}\).
- Repeated roots; the general solution is \(y=(A+Bx)e^{\lambda x}\).
- Complex roots; the general solution is of the form \(y=e^{\alpha x}(A\cos\beta x+B\sin\beta x)\), where \(\lambda_{1}=\alpha+i\beta\) and \(\lambda_{2}=\alpha-i\beta\) (the complex conjugate). Think: \(e^{real}(A\cos(imaginary .x) + B\sin(imaginary.x))\).
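A worked example for the real, distinct roots case (the equation is made up for illustration):

\[\begin{aligned}
&y'' - 3y' + 2y = 0 \quad\Rightarrow\quad \lambda^{2} - 3\lambda + 2 = (\lambda-1)(\lambda-2) = 0 \\
&\lambda_{1}=1,\ \lambda_{2}=2 \quad\Rightarrow\quad y = Ae^{x} + Be^{2x}
\end{aligned}\]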
- Solve the equation \(ay'' + by' + cy = f(x)\), and for bonus points, solve for when \(y(m)=n\) and \(y'(k)=l\).
- First, solve the homogeneous equation using the above steps, to find \(H(x)\).
- Next, find the particular solution \(P(x)\) using the following cases:
- If \(f(x)=e^{\alpha x}\): try \(y=Ae^{\alpha x}\) if \(\alpha\) is not a root of the auxiliary equation; \(y=Axe^{\alpha x}\) if \(\alpha\) is a non-repeated root; \(y=Ax^{2}e^{\alpha x}\) if \(\alpha\) is a repeated root.
- If \(f(x)\) is a polynomial of degree \(n\): try a polynomial of degree \(n\) if 0 is not a root of the auxiliary equation; degree \(n+1\) if 0 is a non-repeated root; degree \(n+2\) if 0 is a repeated root. E.g. if you have \(f(x)=5x\) then you can try \(y=Cx+D\), which has degree 1.
- If \(f(x)=A\cos\alpha x + B\sin\alpha x\) (remember to substitute a value for \(\alpha\)): try \(y=C\cos\alpha x+D\sin\alpha x\) if \(i\alpha\) is not a root of the auxiliary equation; \(y=x(C\cos\alpha x+D\sin\alpha x)\) otherwise.
- Substitute the trial \(y\) into the equation and solve for the values of the coefficients.
- The general solution is \(y=H(x)+P(x)\). To fix the constants \(A, B\) in the homogeneous part, compute \(y\) and \(y'\), substitute the given conditions \(y(m)=n\) and \(y'(k)=l\), and solve for \(A\) and \(B\).
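A worked non-homogeneous example (made up for illustration), continuing the same auxiliary equation as above:

\[\begin{aligned}
&y'' - 3y' + 2y = e^{3x};\quad \lambda^{2}-3\lambda+2=0 \Rightarrow \lambda=1,2,\quad H(x)=Ae^{x}+Be^{2x} \\
&\alpha=3 \text{ is not a root, so try } y=Ce^{3x}:\quad (9C-9C+2C)e^{3x}=e^{3x} \Rightarrow C=\tfrac{1}{2} \\
&y = Ae^{x} + Be^{2x} + \tfrac{1}{2}e^{3x}
\end{aligned}\]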