Eigenvalues and Eigenvectors of Asymmetric Matrices

If $A$ is a square but asymmetric real matrix, the eigenvector-eigenvalue situation becomes quite different from the symmetric case. We gave a variational treatment of the symmetric case, using the connection between eigenvalue problems and quadratic forms (or ellipses and other conic sections, if you have a geometric mind). That connection, however, is lost in the asymmetric case, and there is no obvious variational problem associated with eigenvalues and eigenvectors.

Let us first define eigenvalues and eigenvectors in the asymmetric case. As before, an eigen-pair $(\lambda,x)$ is a solution to the equation $Ax=\lambda x$ with $x\neq 0$. This can also be written as $(A-\lambda I)x=0$, which shows that the eigenvalues are the solutions of the equation $\det(A-\lambda I)=0$. Now $\det(A-\lambda I)$, as a function of $\lambda$, is the characteristic polynomial of $A$. It is a polynomial of degree $n$, and by the fundamental theorem of algebra there are $n$ real and complex roots, counting multiplicities. Thus $A$ has $n$ eigenvalues, as before, although some of them can be complex.
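
As a quick numerical illustration (a minimal sketch, assuming numpy is available; the matrix below is an arbitrary asymmetric example, not one from the text), the roots of the characteristic polynomial and the eigenvalues returned by a standard solver coincide:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # arbitrary asymmetric example

coeffs = np.poly(A)                    # coefficients of det(A - lambda*I), highest degree first
print(np.sort(np.roots(coeffs)))       # roots of the characteristic polynomial: [2. 3.]
print(np.sort(np.linalg.eigvals(A)))   # eigenvalues of A: the same values
```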

A first indication that something may be wrong, or at least fundamentally different, is the matrix
\[
A=\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}.
\]
The characteristic equation $\lambda^2=0$ has the root $\lambda=0$, with multiplicity 2. Thus an eigenvector should satisfy $Ax=0$, which merely says $x_2=0$. Thus $A$ does not have two linearly independent, let alone orthogonal, eigenvectors.
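
A short numerical check of this defective example (a sketch assuming numpy; we only want to see how a standard solver behaves here):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

lam, X = np.linalg.eig(A)
print(lam)                  # [0. 0.]: the root 0, with multiplicity 2
print(X)                    # both columns are (numerically) multiples of e_1 = (1, 0)
print(np.linalg.det(X))     # essentially zero: no two linearly independent eigenvectors
```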

A second problem is illustrated by the anti-symmetric matrix
\[
A=\begin{bmatrix} 0 & 1\\ -1 & 0\end{bmatrix},
\]
for which the characteristic polynomial is $\lambda^2+1$. The characteristic equation has the two complex roots $+i$ and $-i$. The corresponding eigenvectors are the columns of
\[
X=\begin{bmatrix} 1 & 1\\ i & -i\end{bmatrix}.
\]
Thus both eigenvalues and eigenvectors may be complex. In fact if we take complex conjugates on both sides of $Ax=\lambda x$, and remember that $A$ is real, we see that $A\bar{x}=\bar{\lambda}\bar{x}$. Thus $(\lambda,x)$ is an eigen-pair if and only if $(\bar{\lambda},\bar{x})$ is. If $A$ is real and of odd order it always has at least one real eigenvalue. If an eigenvalue $\lambda$ is real and of geometric multiplicity $m$, then there are $m$ corresponding real and linearly independent eigenvectors. They are simply a basis for the null space of $A-\lambda I$.
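
The conjugate-pair structure can also be seen numerically (a sketch assuming numpy, using the same anti-symmetric matrix):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

lam, X = np.linalg.eig(A)
print(lam)    # [0.+1.j 0.-1.j]: the conjugate pair i and -i
print(X)      # complex eigenvectors; the second column is the conjugate of the first
```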

A third problem, which by definition did not come up in the symmetric case, is that we now have an eigen problem for both $A$ and its transpose $A'$. Since for all $\lambda$ we have $\det(A-\lambda I)=\det(A'-\lambda I)$, it follows that $A$ and $A'$ have the same eigenvalues. We say that $(\lambda,x)$ is a right eigen-pair of $A$ if $Ax=\lambda x$, and $(\lambda,y)$ is a left eigen-pair of $A$ if $y'A=\lambda y'$, which is of course the same as $A'y=\lambda y$.
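
The following sketch (assuming numpy; the matrix $A$ is an arbitrary asymmetric example) checks that $A$ and $A'$ have the same eigenvalues and that an eigenvector of $A'$ acts as a left eigenvector of $A$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lam_right, X = np.linalg.eig(A)     # right eigen-pairs: A x = lambda x
lam_left, Y = np.linalg.eig(A.T)    # right eigen-pairs of A', i.e. left eigen-pairs of A

print(np.sort(lam_right), np.sort(lam_left))   # the same spectrum twice
y = Y[:, 0]
print(y @ A - lam_left[0] * y)                 # ~ [0, 0]: y' A = lambda y'
```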

A matrix $A$ is diagonalizable if there exists a non-singular $X$ such that $X^{-1}AX=\Lambda$, with $\Lambda$ diagonal. Instead of the spectral decomposition of symmetric matrices we have the decomposition $A=X\Lambda X^{-1}$ or $X^{-1}AX=\Lambda$. A matrix that is not diagonalizable is called defective.
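
A hedged numerical sketch of the decomposition (assuming numpy; the matrix is an arbitrary example with distinct eigenvalues, hence diagonalizable):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

lam, X = np.linalg.eig(A)    # columns of X are right eigenvectors
Lam = np.diag(lam)

print(np.allclose(np.linalg.inv(X) @ A @ X, Lam))   # True: X^{-1} A X = Lambda
print(np.allclose(X @ Lam @ np.linalg.inv(X), A))   # True: A = X Lambda X^{-1}
```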

Result: A matrix $A$ of order $n$ is diagonalizable if and only if it has $n$ linearly independent right eigenvectors, if and only if it has $n$ linearly independent left eigenvectors. We show this for right eigenvectors. Collect them in the columns of a matrix $X$. Thus $AX=X\Lambda$, with $X$ non-singular. This implies $X^{-1}A=\Lambda X^{-1}$, and thus the rows of $X^{-1}$ are $n$ linearly independent left eigenvectors. Also $X^{-1}AX=\Lambda$, so $A$ is diagonalizable. Conversely, if $X^{-1}AX=\Lambda$ then $AX=X\Lambda$ and $X^{-1}A=\Lambda X^{-1}$, so we have $n$ linearly independent right and left eigenvectors.
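
The same computation also shows the rows of $X^{-1}$ acting as left eigenvectors (a sketch assuming numpy; the example matrix is again arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

lam, X = np.linalg.eig(A)
Xinv = np.linalg.inv(X)

for i in range(2):
    y = Xinv[i, :]                           # i-th row of X^{-1}
    print(np.allclose(y @ A, lam[i] * y))    # True: y' A = lambda_i y'
```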

Result: If the eigenvalues $\lambda_1,\dots,\lambda_n$ of $A$ are all different, then the corresponding eigenvectors $x_1,\dots,x_n$ are linearly independent. We show this by contradiction. Select a maximal linearly independent subset from the $x_i$. Suppose there are $r<n$ of them, so the eigenvectors are linearly dependent. Without loss of generality the maximal linearly independent subset can be taken as the first $r$. Then for all $j>r$ there exist $\alpha_{ij}$ such that
\[
x_j=\sum_{i=1}^{r}\alpha_{ij}x_i.
\]
Premultiply with $A$ to get
\[
\lambda_j x_j=\sum_{i=1}^{r}\alpha_{ij}\lambda_i x_i.
\]
Premultiply by $\lambda_j$ to get
\[
\lambda_j x_j=\sum_{i=1}^{r}\alpha_{ij}\lambda_j x_i.
\]
Subtract the last equation from the previous one to get
\[
0=\sum_{i=1}^{r}\alpha_{ij}(\lambda_i-\lambda_j)x_i,
\]
which implies that $\alpha_{ij}(\lambda_i-\lambda_j)=0$ for all $i$, because the $x_i$ are linearly independent. Since the eigenvalues are unequal, this implies $\alpha_{ij}=0$ and thus $x_j=0$ for all $j>r$, contradicting that the $x_j$ are eigenvectors. Thus $r=n$ and the $x_i$ are linearly independent.
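
A numerical sketch of this result (assuming numpy; the triangular example matrix is arbitrary, chosen so that its eigenvalues 1, 2, 3 are visibly distinct):

```python
import numpy as np

A = np.array([[1.0, 4.0, 5.0],
              [0.0, 2.0, 6.0],
              [0.0, 0.0, 3.0]])

lam, X = np.linalg.eig(A)
print(np.sort(lam))                 # [1. 2. 3.]: all eigenvalues different
print(np.linalg.matrix_rank(X))     # 3: the eigenvectors are linearly independent
```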

Note 030615: Add a small amount on defective matrices. Add material on characteristic and minimal polynomials. Talk about using the SVD instead.