#linear #algebra #maths #public

# Summary

If we have some matrix $\mathbf{A}$, the eigenvalues of $\mathbf{A}$ are scalar values $\lambda\in\mathbb{C}$ that satisfy the "[[#The eigenvector equation|eigenvector equation]]" $\mathbf{Ax}=\lambda \mathbf{x}$, and for each value of $\lambda$, the associated eigenvector is some non-zero vector $\mathbf{x}$. Notice that eigen*values* can be zero, but the zero vector is not considered an eigenvector (or it would need to be an eigenvector of every matrix).

Eigenvectors of $\mathbf{A}$ are vectors which do not change direction under the linear transformation specified by $\mathbf{A}$, and if $\begin{pmatrix*}a \\ b\end{pmatrix*}$ is an eigenvector then all scalar multiples $\gamma \begin{pmatrix*}a \\ b\end{pmatrix*}, \gamma \ne 0$ of that eigenvector are eigenvectors also.

Eigenvectors have to be non-zero for two reasons. Firstly, the origin is always invariant under any linear transformation, so if we permitted the zero vector to be an eigenvector, it would be an eigenvector of every matrix. Secondly, $\mathbf{x}=\mathbf{0} \implies \mathbf{Ax}=\lambda \mathbf{x}\ \forall \mathbf{A},\lambda$, so if we allowed the zero eigenvector then any value would be an eigenvalue of any matrix, which would make eigenvalues a bit less eigenvaluable (I can't really believe I wrote that but now that it's there it's staying in).

In the absence of something like Mathematica, eigenvalues are found as the roots of the characteristic polynomial $p(\lambda)$, which itself derives from an important observation about the eigenvector equation: we can rearrange it to $(\mathbf{A} - \lambda \mathbb{I})\mathbf{x} = \mathbf{0}$, where $\mathbb{I}$ is the identity matrix. (This derivation is given [[#Alternative form of the eigenvector equation|below]].) Having done that, we realise that for this equation to hold with $\mathbf{x}\ne\mathbf{0}$, the matrix $(\mathbf{A} - \lambda \mathbb{I})$ must be singular. So we seek values of $\lambda$ such that

$\det (\mathbf{A} - \lambda \mathbb{I})=0.$

These are the roots of the [[#Characteristic polynomial|characteristic polynomial]].

# The eigenvector equation

The most common way to write the eigenvector equation is in column vector form

$\mathbf{Ax} = \lambda \mathbf{x} \tag{1}$

where

- $\mathbf{A}$ is some square matrix
- $\lambda \in \mathbb{C}$ is a scalar, known as an eigenvalue
- $\mathbf{x}$ is a vector, known as an eigenvector associated with the eigenvalue $\lambda$.

The eigenvectors of a matrix associated with a particular eigenvalue lie on a line through the origin, and any vector from the origin to a point on that line is an eigenvector. An $m \times m$ matrix will have $m$ eigenvalues, although eigenvalues can be repeated. Even a real-valued matrix may have complex eigenvalues and complex-valued eigenvectors. The characteristic polynomial of a $2\times 2$ matrix is a quadratic, and in general that of an $m \times m$ matrix is of order $m$; by the fundamental theorem of algebra it therefore has exactly $m$ roots (counted with multiplicity, possibly complex), which is why an $m \times m$ matrix has $m$ eigenvalues.
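If you do have something like Mathematica (or Python) to hand, all of this is easy to check numerically. Here is a minimal sketch using numpy, assuming the same matrix as the worked examples further down; `np.linalg.eig` returns the eigenvalues and a matrix whose columns are corresponding unit-scaled eigenvectors.

```python
import numpy as np

# The same matrix as in the worked examples below.
A = np.array([[1.0, 3.0],
              [4.0, 2.0]])

# Eigenvalues, and a matrix whose columns are (unit-normalised) eigenvectors.
lam, vecs = np.linalg.eig(A)
print(lam)  # 5 and -2 (order may vary)

# Each pair satisfies the eigenvector equation Ax = lambda x.
for i, l in enumerate(lam):
    x = vecs[:, i]
    assert np.allclose(A @ x, l * x)
```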
## Alternative form of the eigenvector equation

In general, since we want to find values of $\lambda$ such that $\mathbf{Ax} = \lambda \mathbf{x}$ for some non-zero $\mathbf{x}$, a useful equivalent form of (1) for finding the characteristic polynomial is

$(\mathbf{A} - \lambda \mathbb{I})\mathbf{x} = \mathbf{0} \tag{2}$

where $\mathbb{I}$ is the identity matrix. Notice the right hand side is the zero vector. This is derived as follows

$
\begin{align*}
\mathbf{Ax} &= \lambda \mathbf{x}\\
\mathbf{Ax} &= \lambda \mathbb{I} \mathbf{x} \tag{3}\\
\mathbf{Ax} - \lambda \mathbb{I}\mathbf{x} &= \mathbf{0}\\
(\mathbf{A} - \lambda \mathbb{I})\mathbf{x} &= \mathbf{0}
\end{align*}
$

Both $\mathbf{Ax} = \lambda \mathbf{x}$ and $(\mathbf{A} - \lambda \mathbb{I})\mathbf{x} = \mathbf{0}$ are referred to as the eigenvector equation. In form (2), the identity matrix $\mathbb{I}$ is introduced at step (3) to allow the subsequent factorization: $\lambda$ is a scalar whereas $\mathbf{A}$ is a matrix, so the expression $\mathbf{A} - \lambda$ would make no sense and we cannot factor $\mathbf{x}$ out directly. $\lambda \mathbb{I}$ is a square matrix with the same dimensionality as $\mathbf{A}$, solving this problem.

## Matrix form of the eigenvector equation

An alternative way of thinking about eigenvectors and eigenvalues is given by Steven Brunton

![](https://www.youtube.com/watch?v=QYS-ML_vn4k)

In this video he motivates eigenvalues and eigenvectors entirely from the idea of solving systems of differential equations. Suppose we have some system of linear differential equations in matrix form $\dot{\mathbf{x}} = \mathbf{Ax}$. It is clearly going to be simpler to solve a _decoupled_ system than one which is coupled, and in matrix form a decoupled system has a coefficient matrix that is diagonal. So, can we find a matrix $\mathbf{T}$ that represents a change of coordinates $\mathbf{x} = \mathbf{Tz}$ (or equivalently $\mathbf{z} = \mathbf{T}^{-1}\mathbf{x}$) such that $\dot{\mathbf{z}} = \mathbf{Dz}$ for some diagonal $\mathbf{D}$?

It turns out we often can: $\mathbf{T}$ is the matrix with the eigenvectors of $\mathbf{A}$ as its columns, and $\mathbf{D}$ is the diagonal matrix whose diagonal entries are the eigenvalues of $\mathbf{A}$, each in the column corresponding to its eigenvector's column in $\mathbf{T}$. Then the solution to the system is

$\mathbf{x}(t) = \mathbf{T}\mathrm{e}^{\mathbf{D}t}\mathbf{T}^{-1}\mathbf{x}(0)$

where $\mathbf{T}$ and $\mathbf{D}$ are as before, $\mathbf{x}(0)$ is the initial condition as a column vector and $\mathrm{e}^{\mathbf{D}t}$ is a _matrix exponential_. Computing a matrix exponential involves taking the Taylor series expansion of $\mathrm{e}^{x}$ and plugging in the matrix for $x$. This is hard in the general case, but very simple in the specific case of a diagonal matrix, because of an important property of an eigendecomposition: if we can diagonalize some matrix $\mathbf{A}$ as $\mathbf{A} = \mathbf{TDT}^{-1}$, then $\mathbf{A}^n = \mathbf{TD}^n\mathbf{T}^{-1}$ (the inner $\mathbf{T}^{-1}\mathbf{T}$ pairs cancel), so every term of the series can be computed on $\mathbf{D}$ alone. And since $\mathbf{D}$ is diagonal, $\mathbf{D}^n$ just raises each diagonal entry to the $n$th power, which means that if the eigenvalues of $\mathbf{A}$ are $\lambda_1, \lambda_2,\ldots,\lambda_n$, then

$
\mathrm{e}^{\mathbf{D}t} = \begin{pmatrix}
\mathrm{e}^{\lambda_1 t} & 0 & \cdots & 0 \\
0 & \mathrm{e}^{\lambda_2 t} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \mathrm{e}^{\lambda_n t}
\end{pmatrix}
$

So, just a diagonal matrix with $\mathrm{e}^{\lambda_1 t}$ etc. on the diagonal. Multiplying by the initial condition gives us a form equivalent to what we would expect if solving these equations without using matrix magic.

TODO: write this up and show why

1. The matrix exponential
2. the eigendecomposition powers thing (how the terms cancel)
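In the meantime, here is a small Python sketch of the solution formula $\mathbf{x}(t) = \mathbf{T}\mathrm{e}^{\mathbf{D}t}\mathbf{T}^{-1}\mathbf{x}(0)$, assuming scipy is available for the cross-check; the matrix, initial condition and time are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary coupled system x' = Ax, purely for illustration.
A = np.array([[1.0, 3.0],
              [4.0, 2.0]])
x0 = np.array([1.0, 0.0])  # initial condition x(0)
t = 0.5

# Eigendecomposition: columns of T are eigenvectors, lam the eigenvalues.
lam, T = np.linalg.eig(A)

# x(t) = T e^{Dt} T^{-1} x(0); e^{Dt} is diagonal with entries e^{lambda_i t}.
x_t = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T) @ x0

# Cross-check against a direct matrix exponential of At.
assert np.allclose(x_t, expm(A * t) @ x0)
print(x_t)
```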
# Characteristic polynomial

Eigenvalues are the roots of the "characteristic polynomial" $p(\lambda)$, which finds where $\det (\mathbf{A} - \lambda \mathbb{I}) = 0$.

## Derivation of the characteristic polynomial

Since we seek to find where $\det (\mathbf{A} - \lambda \mathbb{I}) = 0$, let us first consider the matrix $\mathbf{A} - \lambda \mathbb{I}$, and for simplicity we will use the 2x2 case.

$
\begin{align*}
\text{Let}~\mathbf{A} &= \begin{pmatrix*}a & b\\c & d\end{pmatrix*}\\
\mathbb{I} &= \begin{pmatrix*}1 & 0\\0 & 1\end{pmatrix*},\\
\text{so}~\lambda\mathbb{I} &=\begin{pmatrix*}\lambda & 0\\0 & \lambda\end{pmatrix*}.\\[8px]
\text{Finally}~\mathbf{A} - \lambda \mathbb{I} &= \begin{pmatrix*}a-\lambda & b\\c & d-\lambda\end{pmatrix*}\\[8px]
\text{thus}~p(\lambda) &= \det (\mathbf{A} - \lambda \mathbb{I})\\
&= (a-\lambda)(d-\lambda)-bc \tag{4}
\end{align*}
$

## Shortcut for the characteristic polynomial of a 2x2 matrix

A shortcut to write the characteristic polynomial of a 2x2 matrix $\mathbf{A}$ is

$p(\lambda) = \lambda^2 -(\mathop{\mathrm{tr}}\mathbf{A})\lambda + \det \mathbf{A} \tag{5}$

where $\mathop{\mathrm{tr}}\mathbf{A}$ is the "trace" of $\mathbf{A}$ (ie the sum of the elements on the diagonal). This is derived as follows:

$
\begin{align*}
\text{Let}~\mathbf{A} &= \begin{pmatrix*}a & b\\c & d\end{pmatrix*}\\
\det \mathbf{A} &= ad - bc\\
\mathop{\mathrm{tr}} \mathbf{A} &= a + d\\
p(\lambda) &= (a-\lambda)(d-\lambda)-bc&\text{from (4) above}\\
&= ad -d\lambda -a\lambda + \lambda^2 -bc\\
&= \lambda^2-d\lambda - a\lambda+ad -bc\\
&= \lambda^2-(a +d)\lambda+ad -bc\\
&= \lambda^2-(\mathop{\mathrm{tr}} \mathbf{A})\lambda + \det \mathbf{A}
\end{align*}
$

# Finding Eigenvalues

To find the eigenvalues we want to find the roots of the characteristic polynomial. From form (2) of the eigenvector equation, $(\mathbf{A} - \lambda \mathbb{I})\mathbf{x} = \mathbf{0}$ with $\mathbf{x} \ne \mathbf{0}$ can only be true if $\det (\mathbf{A} - \lambda \mathbb{I}) = 0$, and this determinant is exactly the characteristic polynomial $p(\lambda)$.

Note: the sum of the eigenvalues is equal to the trace of $\mathbf{A}$, and the product of the eigenvalues is equal to $\det \mathbf{A}$.

## Worked example

Find the eigenvalues of the matrix $\begin{pmatrix*}1 & 3 \\ 4 & 2\end{pmatrix*}$.

We tackle this as follows

$
\begin{align*}
\text{Let}~\mathbf{A} &= \begin{pmatrix*}1 & 3 \\ 4 & 2\end{pmatrix*}\\
\det \mathbf{A} &= 2 - 12 = -10\\
\mathop{\mathrm{tr}} \mathbf{A} &= 3\\
\text{Thus}~p(\lambda) &= \lambda^2 - 3\lambda -10\\
&= (\lambda -5)(\lambda + 2) \tag{6}
\end{align*}
$

So the eigenvalues of the matrix $\begin{pmatrix*}1 & 3 \\ 4 & 2\end{pmatrix*}$ are 5 and -2.
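As a cross-check, numpy can produce the characteristic polynomial and its roots directly. A sketch using the same matrix: `np.poly` returns the coefficients of the characteristic polynomial of a square matrix (highest power first), and `np.roots` finds its roots.

```python
import numpy as np

A = np.array([[1, 3],
              [4, 2]])

# Coefficients of p(lambda), highest power first:
# [1, -3, -10] is lambda^2 - 3*lambda - 10.
coeffs = np.poly(A)
print(coeffs)

# The eigenvalues are the roots of p(lambda): 5 and -2 (order may vary).
print(np.roots(coeffs))

# Sanity-check the trace and determinant identities noted above.
lam = np.linalg.eigvals(A)
assert np.isclose(lam.sum(), np.trace(A))        # sum of eigenvalues = trace
assert np.isclose(lam.prod(), np.linalg.det(A))  # product = determinant
```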
# Finding Eigenvectors

The normal way of finding eigenvectors is to convert the eigenvector equation into a set of simultaneous equations and solve these for the invariant line. The eigenvector is any position vector on this line.

## Worked example

More common than the example above is being asked to find the eigenvalues and the associated eigenvectors. For example: find the eigenvalues of the matrix $\begin{pmatrix*}1 & 3 \\ 4 & 2\end{pmatrix*}$ and, for each eigenvalue, find the associated eigenvector. We found the eigenvalues above in (6), so let's assume that work and start with the eigenvalues $\lambda_1 = 5$ and $\lambda_2 = -2$.

Recall that the eigenvectors are values of $\mathbf{x}$ that satisfy the eigenvector equation, so

$
\begin{align*}
(\mathbf{A} - \lambda\mathbb{I})\mathbf{x} &= \mathbf{0}\\
\mathbf{A} - \lambda\mathbb{I} &= \begin{pmatrix*} 1-\lambda & 3 \\ 4 & 2-\lambda \end{pmatrix*}\\[8px]
\text{Let}~\mathbf{x} &= \begin{pmatrix*}x\\y\end{pmatrix*}\\
\text{then}~\begin{pmatrix*} 1-\lambda & 3 \\ 4 & 2-\lambda \end{pmatrix*}\begin{pmatrix*}x\\y\end{pmatrix*} &= \begin{pmatrix*}0\\0\end{pmatrix*}\\[8px]
\begin{pmatrix*}(1-\lambda)x+3y\\4x+(2-\lambda)y\end{pmatrix*} &= \begin{pmatrix*}0\\0\end{pmatrix*}
\end{align*}
$

Or, to write this more conventionally as a set of simultaneous equations,

$
\begin{align*}
(1-\lambda)x+3y &= 0 \tag{7}\\
4x+(2-\lambda)y &= 0 \tag{8}\\
\end{align*}
$

We will use these to find the relationship between $x$ and $y$ for each eigenvalue $\lambda$. First, for $\lambda_1=5$, this gives

$
\begin{align*}
(1-5)x + 3y &= 0&\text{for (7)}\\
-4x + 3y &= 0\\
3y &= 4x
\end{align*}
$

So $\begin{pmatrix*} 3 \\ 4\end{pmatrix*}$ is an eigenvector associated with the eigenvalue $\lambda_1=5$. To check this is an eigenvector, recall that it must satisfy $\mathbf{Ax} = \lambda \mathbf{x}$, so

$
\begin{align*}
\mathbf{Ax} &= \begin{pmatrix*}1 & 3 \\ 4 & 2\end{pmatrix*}\begin{pmatrix*}3\\4\end{pmatrix*}\\
&= \begin{pmatrix*}15\\20\end{pmatrix*}\\
\lambda\mathbf{x} &= 5\begin{pmatrix*}3\\4\end{pmatrix*}\\
&= \begin{pmatrix*}15\\20\end{pmatrix*}\\
\text{Thus}~\mathbf{Ax} &=\lambda\mathbf{x}
\end{align*}
$

...and therefore $\begin{pmatrix*} 3 \\ 4\end{pmatrix*}$ is an eigenvector associated with the eigenvalue $\lambda_1=5$.

Notice that we know it is not possible to solve the system in (7) and (8) to get specific values for $(x,y)$, because the eigenvectors form a line. Let's try anyway to see what happens

$
\begin{align*}
4x+(2-5)y &= 0 &\text{for (8)}\\
4x-3y &= 0\\
3y &= 4x\\
\end{align*}
$

...giving us the exact same eigenvector as before. This is logical: since we found $\lambda$ by solving for where the determinant is zero, we have a singular system of simultaneous equations, so the two equations are scalar multiples of each other for any given value of $\lambda$.

Now, for $\lambda_2=-2$, we have

$
\begin{align*}
(1-(-2))x + 3y &= 0&\text{for (7)}\\
3x+3y &= 0\\
y &= -x\\
\text{So}~\mathbf{x} &= \begin{pmatrix*}1\\-1\end{pmatrix*}~\text{must be an eigenvector of }\mathbf{A}~\text{associated with }\lambda = -2.\\
\text{Check:}~\mathbf{Ax} &=\begin{pmatrix*}1 & 3 \\ 4 & 2\end{pmatrix*}\begin{pmatrix*}1\\-1\end{pmatrix*}\\
&= \begin{pmatrix*}-2\\2\end{pmatrix*}\\
\lambda\mathbf{x} &= -2 \begin{pmatrix*}1\\-1\end{pmatrix*}\\
&= \begin{pmatrix*}-2\\2\end{pmatrix*}\\
\text{So}~\mathbf{Ax} &= \lambda\mathbf{x}~\text{as expected.}
\end{align*}
$

...and therefore $\begin{pmatrix*}1\\-1\end{pmatrix*}$ is an eigenvector of $\mathbf{A}$ associated with $\lambda = -2$.

### Technique

1. Find $\mathbf{A}-\lambda\mathbb{I}$. (You can do this by just taking $\mathbf{A}$ and writing "$-\lambda$" next to each value on the leading diagonal.)
2. Find the roots of $p(\lambda) = \det(\mathbf{A}-\lambda\mathbb{I})$. These are the eigenvalues of $\mathbf{A}$.
3. Write the simultaneous equations derived from $(\mathbf{A}-\lambda\mathbb{I})\mathbf{x} = \mathbf{0}$, ie (7) and (8) above.
4. Plug each eigenvalue into one of these equations and solve it for $y$ as a function of $x$.
5. Write down the eigenvector: if your solution looks like $ay = bx$, the eigenvector will be $\begin{pmatrix*}a\\b\end{pmatrix*}$.
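To check work like this without rounding errors, sympy (assuming it is installed) can run the whole technique symbolically. `Matrix.eigenvects` returns each eigenvalue with its multiplicity and a basis for the associated eigenvectors; sympy's scaling differs from ours, but the vectors lie on the same lines, e.g. a multiple of $\begin{pmatrix*}3\\4\end{pmatrix*}$ for $\lambda=5$.

```python
import sympy as sp

A = sp.Matrix([[1, 3],
               [4, 2]])

# Each entry is (eigenvalue, algebraic multiplicity, basis eigenvectors).
for lam, mult, vecs in A.eigenvects():
    v = vecs[0]
    print(lam, list(v))
    # The eigenvector equation Ax = lambda x holds exactly.
    assert A * v == lam * v
```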
# Nature of Eigenvalues

In many cases the nature of the eigenvalues is important (for example, to classify an equilibrium point in a system of differential equations you look at the eigenvalues of the [[Gradient#The Jacobian|Jacobian matrix]] of the system at that point). So it's worth noticing that for an $m\times m$ matrix the characteristic polynomial will be of order $m$, and therefore the normal patterns of roots for an order $m$ polynomial apply. Specifically, for a $2\times 2$ matrix the eigenvalues can fall into the following categories:

- Real and distinct
- Real and repeated
- Complex conjugate

In the case of real eigenvalues, for some problems the sign matters (eg in the equilibrium point example, if both eigenvalues are positive the point is a source, if both are negative it is a sink, and if one is positive and the other negative it is a saddle point).

# Eigenvalues and Eigenvectors of common special matrices

## Triangular

### Eigenvalues

The eigenvalues of any triangular matrix are simply the values on the diagonal. So an upper triangular matrix $\begin{pmatrix*}a & b\\0 & c\end{pmatrix*}$ will have eigenvalues $a$ and $c$.

### Eigenvectors

If the column containing an eigenvalue is zero apart from the diagonal entry, the eigenvector associated with that eigenvalue lies on the corresponding axis. So in the case above, $\begin{pmatrix*}1\\0\end{pmatrix*}$ is an eigenvector associated with the eigenvalue $\lambda =a$. The other eigenvector will need to be found in the usual way.

![[eigenvalues_1.png]]

## Diagonal

Diagonal matrices have their eigenvalues on the diagonal and all of their eigenvectors on the axes. So the eigenvalues of $\begin{pmatrix*}a & 0\\0 & b\end{pmatrix*}$ are $a$ and $b$, and the associated eigenvectors are $\begin{pmatrix*}1\\0\end{pmatrix*}$ and $\begin{pmatrix*}0\\1\end{pmatrix*}$ respectively.

### $k,l$ dilation

As you can see, from the perspective of linear transformations, a diagonal matrix is simply a dilation. So the eigenvalues of a $k,l$ dilation matrix are $k$ and $l$, and the associated eigenvectors are $\begin{pmatrix*}1\\0\end{pmatrix*}$ and $\begin{pmatrix*}0\\1\end{pmatrix*}$ respectively. Here's what that looks like in Mathematica

![[eigenvalues_2.png]]

## Shear

The matrices $\begin{pmatrix*}1 & k\\0 & 1\end{pmatrix*}$ and $\begin{pmatrix*}1 & 0\\k & 1\end{pmatrix*}$ with $k\ne 0$ represent a horizontal and a vertical shear respectively, by a factor of $k$. These are special cases of triangular matrices which fix every point on one axis and move every other point parallel to that axis, by $k$ times its distance from the axis. As such, the fixed axis carries the eigenvectors and 1 is a repeated eigenvalue.

### Horizontal

A horizontal shear matrix fixes every point on the $x$-axis and moves every other point horizontally by $k$ times its $y$-coordinate. The `ShearingMatrix` function in Mathematica creates a shear by some arbitrary angle in the direction of one vector and normal to another vector. So to create a conventional horizontal shear you will want these to be $\hat{\mathbf{i}}$ and $\hat{\mathbf{j}}$ respectively, as follows. Shears are not hard to figure out by hand, so you may feel the juice is not worth the squeeze as far as the `ShearingMatrix` function goes, at least in 2D.

![[eigenvalues_3.png]]

As this is a triangular matrix, it's clear that 1 is a repeated eigenvalue, and since the transformation fixes points on the $x$-axis, the eigenvector associated with 1 will lie on the $x$-axis, so $\begin{pmatrix*}1\\0\end{pmatrix*}$ will be an associated eigenvector.
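A quick numpy sketch of those shear claims (the value of `k` is an arbitrary choice):

```python
import numpy as np

k = 3.0  # any nonzero shear factor will do
H = np.array([[1.0, k],
              [0.0, 1.0]])  # horizontal shear

lam, vecs = np.linalg.eig(H)
print(lam)         # [1. 1.]: 1 is a repeated eigenvalue
print(vecs[:, 0])  # [1. 0.]: an eigenvector on the x-axis

# Note the shear matrix is *defective*: there is only one independent
# eigenvector direction, so the second column numpy returns is
# (numerically) parallel to the first.
```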
### Vertical

As you might intuitively expect, to get a vertical shear you call `ShearingMatrix` with $\hat{\mathbf{i}}$ and $\hat{\mathbf{j}}$ the other way around.

![[eigenvalues_4.png]]

While 1 is still a repeated eigenvalue, this time the invariant points lie on the $y$-axis, so the eigenvector is $\begin{pmatrix*}0\\1\end{pmatrix*}$, representing $x=0$.

## Rotation

With the exception of rotations through 0 and $\pi$, which are represented by $\mathbb{I}$ and $-\mathbb{I}$ respectively, rotation matrices have no real eigenvalues or eigenvectors. This can be seen from the fact that no line in the real plane is invariant under rotation by any angle except full and half rotations. Let's take the example of a rotation by 45° ($\frac{\pi}{4}$).

$
\begin{align*}
\text{Let}~\mathbf{A}&= \begin{pmatrix} \cos \frac{\pi}{4} & -\sin\frac{\pi}{4}\\ \sin\frac{\pi}{4} & \cos \frac{\pi}{4} \end{pmatrix}\\
&=\begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}} \end{pmatrix}\\
\mathop{\mathrm{tr}}\mathbf{A} &= \frac{2}{\sqrt{2}} = \sqrt{2}\\
\det \mathbf{A} &= 1\\
\text{Thus}~p(\lambda)&= \lambda^2 - \sqrt{2}\lambda + 1\\
\lambda &= \frac{\sqrt{2}\pm\sqrt{2-4}}{2}\\
&= \frac{\sqrt{2}\pm\sqrt{-2}}{2}\\
&= \frac{1\pm i}{\sqrt{2}}\\[8px]
(\mathbf{A}-\lambda\mathbb{I})\mathbf{x} &= \begin{pmatrix} \frac{1}{\sqrt{2}}-\lambda & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}}-\lambda \end{pmatrix}\begin{pmatrix} x\\y \end{pmatrix}\\
&= \begin{pmatrix} (\frac{1}{\sqrt{2}}-\lambda)x -\frac{1}{\sqrt{2}}y \\ \frac{1}{\sqrt{2}}x +(\frac{1}{\sqrt{2}}-\lambda)y \end{pmatrix}\\
\text{So, since}~(\mathbf{A}-\lambda\mathbb{I})\mathbf{x} &= \mathbf{0},~\text{we have}\\
\left(\frac{1}{\sqrt{2}}-\lambda\right)x -\frac{1}{\sqrt{2}}y &= 0,\tag{9}\\
\text{and}~\frac{1}{\sqrt{2}}x +\left(\frac{1}{\sqrt{2}}-\lambda\right)y&=0.\\[8px]
\end{align*}
$

Then, substituting $\lambda=\frac{1+i}{\sqrt{2}}$ into (9), we have

$
\begin{align*}
\left(\frac{1}{\sqrt{2}}-\frac{1+i}{\sqrt{2}}\right)x -\frac{1}{\sqrt{2}}y &= 0\\
-\frac{i}{\sqrt{2}}x -\frac{1}{\sqrt{2}}y &= 0\\
y&=-ix\\
iy&=x\\
\end{align*}
$

So $\begin{pmatrix}i\\1\end{pmatrix}$ is an eigenvector associated with the eigenvalue $\lambda=\frac{1+i}{\sqrt{2}}$, and similarly we would find that $\begin{pmatrix}-i\\1\end{pmatrix}$ is an eigenvector associated with the eigenvalue $\lambda=\frac{1-i}{\sqrt{2}}$.

## Reflections

Recall that reflections (in a line through the origin at angle $\alpha$ to the $x$-axis) are represented by matrices of the form

$
\begin{align*}
\text{Let}~\mathbf{A}&=\begin{pmatrix} \cos 2\alpha & \sin 2\alpha\\ \sin 2\alpha & -\cos 2\alpha \end{pmatrix}\\
\mathop{\mathrm{tr}}\mathbf{A}&= \cos 2\alpha-\cos 2\alpha = 0\\
\det \mathbf{A}&=-\cos^2 2\alpha - \sin^2 2\alpha\\
&= -1(\cos^2 2\alpha + \sin^2 2\alpha)\\
&= -1&\text{by the Pythagorean identity}
\end{align*}
$

So since the characteristic polynomial for a 2x2 matrix is $\lambda^2 -(\mathop{\mathrm{tr}}\mathbf{A})\lambda + \det \mathbf{A}$, we have

$
\begin{align*}
\lambda^2 -1 &= 0\\
(\lambda +1)(\lambda-1)&= 0\\
\text{and thus}~\lambda &= \pm 1
\end{align*}
$

So, as can be seen, irrespective of the angle of the line of reflection, the eigenvalues of a reflection matrix are 1 and -1. The eigenvectors will be along and normal to the axis of reflection.
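numpy will happily confirm both of these numerically, complex arithmetic included. A small sketch (the angles are arbitrary choices):

```python
import numpy as np

theta = np.pi / 4  # the 45-degree rotation from the worked example
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Complex conjugate pair 0.7071 +/- 0.7071j, ie (1 +/- i)/sqrt(2).
print(np.linalg.eigvals(R))

alpha = 0.3  # any mirror-line angle gives the same eigenvalues
M = np.array([[np.cos(2 * alpha),  np.sin(2 * alpha)],
              [np.sin(2 * alpha), -np.cos(2 * alpha)]])
# 1 and -1 (in some order), irrespective of alpha.
print(np.linalg.eigvals(M))
```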
## Flattening

If a matrix has a determinant of zero, it represents either a flattening (the image set of its linear transformation is a line) or it collapses all of the domain into a point at the origin. Since we know that the determinant of a matrix that represents a flattening is zero, we can see by inspection what its eigenvalues are. We know from (5) that for a 2x2 matrix $p(\lambda) = \lambda^2 -(\mathop{\mathrm{tr}}\mathbf{A})\lambda + \det \mathbf{A}$, and we know that $\det \mathbf{A}=0$, so we have

$
\begin{align*}
p(\lambda)&= \lambda^2 -(\mathop{\mathrm{tr}}\mathbf{A})\lambda + 0=0\\
\lambda(\lambda-\mathop{\mathrm{tr}}\mathbf{A}) &= 0
\end{align*}
$

Therefore either $\lambda=0$ or $\lambda = \mathop{\mathrm{tr}}\mathbf{A}$. So the eigenvalues of any matrix that represents a flattening are zero and the trace of the matrix (which might itself be 0, in which case 0 is a repeated eigenvalue). In a flattening the two columns of the matrix are scalar multiples of each other, and any non-zero column is an eigenvector associated with $\lambda = \mathop{\mathrm{tr}}\mathbf{A}$, since the columns span the image line, which is invariant. The eigenvector associated with $\lambda = 0$ lies on the line that is squashed to the origin and needs to be found in the usual way.

### Worked example

TODO

$
\begin{align*}
\begin{pmatrix} 1 & 2 \\ -2 & -4 \end{pmatrix}
\end{align*}
$

# 3x3 (and $m\times m$) triangular and diagonal matrices

These work as you would expect. The eigenvalues are on the leading diagonal, and if a column is zero everywhere apart from the value on the diagonal, then the eigenvectors associated with that eigenvalue lie on the corresponding axis.
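A final numpy sketch tying off both of these sections: the eigenvalues of the flattening matrix from the TODO worked example above, and of a 3x3 upper triangular matrix whose entries are made up for illustration.

```python
import numpy as np

# The flattening matrix from the worked example above: det = 0, tr = -3,
# so we expect eigenvalues 0 and tr A = -3 (in some order).
A = np.array([[1, 2],
              [-2, -4]])
print(np.linalg.eigvals(A))

# A 3x3 upper triangular matrix: eigenvalues are the diagonal entries.
U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])
print(np.linalg.eigvals(U))  # [2. 3. 7.]

# The first column of U is zero apart from its diagonal entry, so
# (1, 0, 0) is an eigenvector for lambda = 2.
e1 = np.array([1.0, 0.0, 0.0])
assert np.allclose(U @ e1, 2.0 * e1)
```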