In numerical analysis, inverse iteration (also known as the inverse power method) is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.
The inverse power iteration algorithm starts with an approximation $\mu$ for the eigenvalue corresponding to the desired eigenvector and a vector $b_0$, either a randomly selected vector or an approximation to the eigenvector. The method is described by the iteration
$$b_{k+1} = \frac{(A - \mu I)^{-1} b_k}{C_k},$$
where the $C_k$ are constants, usually chosen as $C_k = \|(A - \mu I)^{-1} b_k\|$. Since eigenvectors are defined only up to multiplication by a constant, the choice of $C_k$ is in theory arbitrary; practical aspects of the choice of $C_k$ are discussed below.
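A minimal sketch of the iteration in Python with NumPy is given below; the function name, arguments, and default iteration count are illustrative choices, not part of any standard API. Rather than forming $(A - \mu I)^{-1}$ explicitly, each step solves the linear system $(A - \mu I)x = b_k$, which is equivalent and the usual way the step is carried out in practice.

    import numpy as np

    def inverse_iteration(A, mu, b0, num_iters=10):
        """Return an approximate eigenvector of A for the eigenvalue closest to mu."""
        n = A.shape[0]
        M = A - mu * np.eye(n)
        b = b0 / np.linalg.norm(b0)
        for _ in range(num_iters):
            # One step: solve (A - mu I) x = b_k instead of applying the inverse,
            # then normalize with C_k = ||(A - mu I)^{-1} b_k||.
            x = np.linalg.solve(M, b)
            b = x / np.linalg.norm(x)
        return b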
At every iteration, the vector $b_k$ is multiplied by the matrix $(A - \mu I)^{-1}$ and normalized. This is exactly the same formula as in the power method, except that the matrix $A$ is replaced by $(A - \mu I)^{-1}$. The closer the approximation $\mu$ is to the eigenvalue, the faster the algorithm converges; however, an incorrect choice of $\mu$ can lead to slow convergence or to convergence to an eigenvector other than the one desired. In practice, the method is used when a good approximation for the eigenvalue is known, so only a few (often just one) iterations are needed.
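As a small numerical illustration (the matrix and shift below are invented for the example), a single iteration with a good estimate $\mu$ already reproduces the eigenvector to about three digits:

    import numpy as np

    # The eigenvalues of A are (5 ± sqrt(5))/2, approximately 3.618 and 1.382.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    mu = 3.6                               # rough estimate of the eigenvalue ~3.618
    b = np.array([1.0, 1.0])
    b = b / np.linalg.norm(b)

    # A single inverse-iteration step: solve (A - mu I) x = b, then normalize.
    x = np.linalg.solve(A - mu * np.eye(2), b)
    b = x / np.linalg.norm(x)

    print(b)          # ~[0.524, 0.852]; the exact eigenvector is ~[0.526, 0.851]
    print(b @ A @ b)  # Rayleigh quotient ~3.618 recovers the eigenvalue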