Modified Eigenvalue Problems
Suppose we know an eigen decomposition $A=KDK'$ of a real symmetric matrix $A$ of order $n$, with $K$ orthonormal and $D=\mathrm{diag}(d_1,\cdots,d_n)$, and we want to find an eigen decomposition of the rank-one modification $A+\rho cc'$, where $\rho\neq 0$. The problem was first discussed systematically by Golub [1973]. Also see Bunch, Nielsen, and Sorensen [1978] for a more detailed treatment and implementation.
Eigen-pairs $(\lambda,x)$ of $A+\rho cc'$ must satisfy $$(A+\rho cc')x=\lambda x.$$ Change variables to $y=K'x$ and define $b=K'c$. For the time being suppose all elements of $b$ are non-zero and all elements of $D$ are different, with $d_1<d_2<\cdots<d_n$.
We must solve $$(D+\rho bb')y=\lambda y,$$ which we can also write as $$(D-\lambda I)y=-\rho(b'y)\,b.$$
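As a quick numerical check of this change of variables (a sketch using NumPy; the matrix, vector, and value of $\rho$ below are arbitrary illustrations), the spectrum of $A+\rho cc'$ coincides with that of $D+\rho bb'$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2          # real symmetric matrix of order n
c = rng.standard_normal(n)
rho = 0.7

d, K = np.linalg.eigh(A)   # A = K diag(d) K'
b = K.T @ c                # change of variables: b = K'c

# Eigenvalues of the rank-one modification A + rho c c' ...
ev1 = np.linalg.eigvalsh(A + rho * np.outer(c, c))
# ... equal those of the diagonal-plus-rank-one matrix D + rho b b'
ev2 = np.linalg.eigvalsh(np.diag(d) + rho * np.outer(b, b))
assert np.allclose(ev1, ev2)
```

Orthogonal similarity transformations preserve eigenvalues, so the two spectra agree to rounding error.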
Suppose $(\lambda,y)$ is a solution with $b'y=0$. Then $(D-\lambda I)y=0$, and because all $d_i$ are different $y$ must be a vector with a single element, say $y_i$, not equal to zero. But then $b'y=b_iy_i$, which is non-zero. Thus $b'y$ is non-zero at a solution, and because eigenvectors are determined up to a scalar factor we may as well require $b'y=1$.
Now solve $$(D-\lambda I)y=-\rho b.$$ At a solution we must have $\lambda\neq d_i$ for all $i$, because otherwise $b_i$ would be zero. Thus $D-\lambda I$ is non-singular and we can find $y$ by solving $$y=-\rho(D-\lambda I)^{-1}b.$$ Premultiplying by $b'$ and using $b'y=1$ gives $$1=-\rho\sum_{i=1}^n\frac{b_i^2}{d_i-\lambda}.$$ If we define $$f(\lambda)=\sum_{i=1}^n\frac{b_i^2}{d_i-\lambda},$$ then we must solve $f(\lambda)=-1/\rho$. Let us first take a closer look at the function $f$.
We have $$f'(\lambda)=\sum_{i=1}^n\frac{b_i^2}{(d_i-\lambda)^2}>0$$ for all $\lambda\neq d_i$, and $f(\lambda)\rightarrow 0$ as $\lambda\rightarrow\pm\infty$. There are vertical asymptotes at all $d_i$, and between $d_i$ and $d_{i+1}$ the function increases from $-\infty$ to $+\infty$. For $\lambda<d_1$ the function increases from $0$ to $+\infty$, and for $\lambda>d_n$ it increases from $-\infty$ to $0$. Thus the equation $f(\lambda)=-1/\rho$ has one solution in each of the $n-1$ open intervals between the $d_i$. If $\rho<0$ it has an additional solution smaller than $d_1$, and if $\rho>0$ it has a solution larger than $d_n$. Writing $\lambda_1<\cdots<\lambda_n$ for the eigenvalues of the modified matrix, if $\rho>0$ then $d_i<\lambda_i<d_{i+1}$ for $i<n$ and $\lambda_n>d_n$, and if $\rho<0$ then $\lambda_1<d_1$ and $d_{i-1}<\lambda_i<d_i$ for $i>1$. Finding the actual eigenvalues in their intervals can be done with any root-finding method. Of course some will be better than others for solving this particular problem. See Melman [1995], [1997], [1998] for suggestions and comparisons.
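The characterization above translates directly into an algorithm: bracket each eigenvalue by its interval and apply a one-dimensional root finder. A minimal sketch with plain bisection (assuming, as above, distinct $d_i$, non-zero $b_i$, and here $\rho>0$; the data are arbitrary illustrations, and production solvers use more refined iterations than bisection):

```python
import numpy as np

def secular_roots(d, b, rho, iters=100):
    """Eigenvalues of diag(d) + rho * outer(b, b) by bisection on the
    secular equation f(lam) = -1/rho, with f(lam) = sum b_i^2/(d_i - lam).
    Sketch: assumes d strictly increasing, all b_i nonzero, rho > 0."""
    d = np.asarray(d, dtype=float)
    b = np.asarray(b, dtype=float)
    # g(lam) = 1 + rho * f(lam) increases between consecutive asymptotes
    g = lambda lam: 1.0 + rho * np.sum(b ** 2 / (d - lam))
    # one root in each open interval (d_i, d_{i+1}); for rho > 0 the extra
    # root lies in (d_n, d_n + rho * b'b) since the trace grows by rho * b'b
    intervals = [(d[i], d[i + 1]) for i in range(len(d) - 1)]
    intervals.append((d[-1], d[-1] + rho * (b @ b)))
    roots = []
    for lo, hi in intervals:
        for _ in range(iters):  # g < 0 just right of lo, g >= 0 at hi
            mid = 0.5 * (lo + hi)
            if g(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return np.array(roots)

d = np.array([0.0, 1.0, 2.0, 5.0])
b = np.array([0.5, -0.3, 0.8, 0.2])
rho = 1.5
lam = secular_roots(d, b, rho)
assert np.allclose(lam, np.linalg.eigvalsh(np.diag(d) + rho * np.outer(b, b)))
```

Because the intervals are ordered, the roots come out sorted, matching the ascending order returned by `eigvalsh`.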
We still have to deal with the assumptions that the elements of $b$ are non-zero and that all $d_i$ are different. Suppose $m$ elements of $b$ are zero; without loss of generality they are the last $m$. Partition $b=(b_1,b_2)$ and $D=\mathrm{diag}(D_1,D_2)$ accordingly, with $b_2=0$. Then we need to solve the modified eigen-problem for $$D+\rho bb'=\begin{bmatrix}D_1+\rho b_1b_1'&0\\0&D_2\end{bmatrix}.$$ But this is a direct sum of smaller matrices, and the eigenvalue problems for $D_1+\rho b_1b_1'$ and $D_2$ can be solved separately.
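A minimal sketch of this reduction (the data below are arbitrary illustrations): the positions where $b$ is zero contribute their $d_i$ unchanged, and only the block with non-zero elements of $b$ requires a genuine modified eigen-problem.

```python
import numpy as np

d = np.array([3.0, 1.0, 4.0, 2.0])
b = np.array([0.7, 0.0, -0.4, 0.0])
rho = 2.0

nz = b != 0                      # positions with non-zero b_i
# Block with nonzero b: a genuine rank-one modified eigen-problem
ev_mod = np.linalg.eigvalsh(np.diag(d[nz]) + rho * np.outer(b[nz], b[nz]))
# Block with zero b: D_2 is untouched, its eigenvalues are the d_i themselves
ev_fix = d[~nz]
combined = np.sort(np.concatenate([ev_mod, ev_fix]))

# The direct sum reproduces the spectrum of the full modified matrix
full = np.linalg.eigvalsh(np.diag(d) + rho * np.outer(b, b))
assert np.allclose(combined, full)
```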
If not all $d_i$ are different we can partition the matrix into blocks corresponding with the, say, $p$ different eigenvalues, where block $j$ has order $n_j$ and $b_j$ is the corresponding subvector of $b$. Now use matrices $L_j$ which are square orthonormal of order $n_j$ and have their first column equal to $b_j/\|b_j\|$. Form the direct sum $L$ of the $L_j$ and compute $L'(D+\rho bb')L$. Because each diagonal block of $D$ is a multiple of the identity we have $L'DL=D$, and therefore $$L'(D+\rho bb')L=D+\rho\tilde b\tilde b',$$ where $\tilde b=L'b$ has subvectors $\|b_j\|e_1$, with $e_1$ the unit vectors, i.e. vectors that are zero except for element $1$ that is one.
A row and column permutation makes the matrix a direct sum of the diagonal matrices $d_jI$ of order $n_j-1$ and the matrix $$\bar D+\rho\bar b\bar b',$$ where $\bar D$ is the diagonal matrix of order $p$ with the $p$ different $d_j$ on the diagonal and $\bar b$ is the vector with elements $\|b_j\|$. This last matrix satisfies our assumptions of different diagonal elements and non-zero elements of $\bar b$ (any zero $\|b_j\|$ can be removed as before), and consequently can be analyzed by using our previous results.
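The deflation for repeated $d_i$ can be sketched as follows (arbitrary illustrative data; the $L_j$ are built here as Householder reflections, one convenient choice of orthonormal matrix with prescribed first column):

```python
import numpy as np

def householder(x):
    """Symmetric orthogonal H with H @ x = ||x|| * e_1, so the first
    column of H is x / ||x|| (sketch; not tuned for numerical safety)."""
    v = x.astype(float).copy()
    v[0] -= np.linalg.norm(x)
    if np.allclose(v, 0.0):
        return np.eye(len(x))
    return np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)

# p = 2 distinct eigenvalues with multiplicities n_1 = 3 and n_2 = 2
d = np.array([1.0, 1.0, 1.0, 4.0, 4.0])
b = np.array([0.6, -0.2, 0.3, 0.5, 0.1])
rho = 1.3
M = np.diag(d) + rho * np.outer(b, b)

# Direct sum L of the per-block matrices L_j; L'DL = D because each
# diagonal block of D is a multiple of the identity
L = np.zeros((5, 5))
L[:3, :3] = householder(b[:3])
L[3:, 3:] = householder(b[3:])
btilde = L.T @ b          # per block: ||b_j|| followed by zeros

# After permuting, the problem splits into d_j-multiples of the identity
# of order n_j - 1 plus a p-by-p modified eigen-problem
b_red = np.array([np.linalg.norm(b[:3]), np.linalg.norm(b[3:])])
small = np.linalg.eigvalsh(np.diag([1.0, 4.0]) + rho * np.outer(b_red, b_red))
ev = np.sort(np.concatenate([small, [1.0, 1.0, 4.0]]))
assert np.allclose(ev, np.linalg.eigvalsh(M))
```

The repeated eigenvalues $d_j$ survive with multiplicity $n_j-1$, and only the small $p\times p$ problem needs the secular-equation machinery.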
A very similar analysis is possible for the modified singular value decomposition, for which we refer to Bunch and Nielsen [1978].