Sherman-Morrison Formula
Suppose that you have already obtained, by herculean effort, the inverse matrix \(\mathbf{A}^{-1}\) of a square matrix \(\mathbf{A}\). Now you want to make a “small” change in \(\mathbf{A}\), for example change one element \(a_{ij}\), or a few elements, or one row, or one column. Is there any way of calculating the corresponding change in \(\mathbf{A}^{-1}\) without repeating your difficult labors? Yes, if your change is of the form
\[
\mathbf{A} \rightarrow (\mathbf{A} + \mathbf{u}\otimes\mathbf{v})
\tag{2.7.1}
\]
for some vectors \(\mathbf{u}\) and \(\mathbf{v}\). If \(\mathbf{u}\) is a unit vector \(\mathbf{e}_{i}\), then (2.7.1) adds the components of \(\mathbf{v}\) to the \(i\text{th}\) row. (Recall that \(\mathbf{u}\otimes\mathbf{v}\) is a matrix whose \(i\), \(j\text{th}\) element is the product of the \(i\text{th}\) component of \(\mathbf{u}\) and the \(j\text{th}\) component of \(\mathbf{v}\).) If \(\mathbf{v}\) is a unit vector \(\mathbf{e}_{j}\), then (2.7.1) adds the components of \(\mathbf{u}\) to the \(j\text{th}\) column. If both \(\mathbf{u}\) and \(\mathbf{v}\) are proportional to unit vectors \(\mathbf{e}_{i}\) and \(\mathbf{e}_{j}\) respectively, then a term is added only to the element \(a_{ij}\).
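For instance, with \(N = 3\),
\[
\mathbf{u}\otimes\mathbf{v} =
\begin{bmatrix}
u_1 v_1 & u_1 v_2 & u_1 v_3 \\
u_2 v_1 & u_2 v_2 & u_2 v_3 \\
u_3 v_1 & u_3 v_2 & u_3 v_3
\end{bmatrix}
\]
so taking \(\mathbf{u} = \mathbf{e}_2\) zeros all but the second row, and (2.7.1) then adds \(\mathbf{v}\) to the second row of \(\mathbf{A}\).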
The Sherman-Morrison formula gives the inverse \((\mathbf{A}+\mathbf{u}\otimes\mathbf{v})^{-1}\), and is derived briefly as follows:
\[
\begin{aligned}
(\mathbf{A}+\mathbf{u}\otimes\mathbf{v})^{-1}
&= (\mathbf{1}+\mathbf{A}^{-1}\cdot\mathbf{u}\otimes\mathbf{v})^{-1}\cdot\mathbf{A}^{-1} \\
&= (\mathbf{1}-\mathbf{A}^{-1}\cdot\mathbf{u}\otimes\mathbf{v}
  +\mathbf{A}^{-1}\cdot\mathbf{u}\otimes\mathbf{v}\cdot\mathbf{A}^{-1}\cdot\mathbf{u}\otimes\mathbf{v}
  -\ldots)\cdot\mathbf{A}^{-1} \\
&= \mathbf{A}^{-1}-\mathbf{A}^{-1}\cdot\mathbf{u}\otimes\mathbf{v}\cdot\mathbf{A}^{-1}\,
  (1-\lambda+\lambda^{2}-\ldots) \\
&= \mathbf{A}^{-1}-\frac{(\mathbf{A}^{-1}\cdot\mathbf{u})\otimes(\mathbf{v}\cdot\mathbf{A}^{-1})}{1+\lambda}
\end{aligned}
\tag{2.7.2}
\]
where
\[
\lambda \equiv \mathbf{v}\cdot\mathbf{A}^{-1}\cdot\mathbf{u}
\tag{2.7.3}
\]
The second line of (2.7.2) is a formal power series expansion. In the third line, the associativity of outer and inner products is used to factor out the scalars \(\lambda\).
The use of (2.7.2) is this: Given \(\mathbf{A}^{-1}\) and the vectors \(\mathbf{u}\) and \(\mathbf{v}\), we need only perform two matrix multiplications and a vector dot product,
\[
\mathbf{z} \equiv \mathbf{A}^{-1}\cdot\mathbf{u} \qquad
\mathbf{w} \equiv (\mathbf{A}^{-1})^{T}\cdot\mathbf{v} \qquad
\lambda = \mathbf{v}\cdot\mathbf{z}
\tag{2.7.4}
\]
to get the desired change in the inverse
\[
\mathbf{A}^{-1} \rightarrow \mathbf{A}^{-1} - \frac{\mathbf{z}\otimes\mathbf{w}}{1+\lambda}
\tag{2.7.5}
\]
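As a concrete dense-matrix illustration of (2.7.4) and (2.7.5), here is a minimal C sketch that updates an explicitly stored inverse in place; the function name sherman_morrison_update and the row-major storage convention are choices made for this sketch, not part of the book's routines.

```c
#include <stdlib.h>

/* Rank-one update of an explicitly stored inverse, per (2.7.4)-(2.7.5):
 *   z = Ainv . u,   w = (Ainv)^T . v,   lambda = v . z,
 *   Ainv -> Ainv - (z outer w) / (1 + lambda).
 * Ainv is n x n in row-major order. */
void sherman_morrison_update(double *Ainv, const double u[],
                             const double v[], int n)
{
    double *z = calloc(n, sizeof(double));    /* z = Ainv . u     */
    double *w = calloc(n, sizeof(double));    /* w = (Ainv)^T . v */
    double lambda = 0.0;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            z[i] += Ainv[i*n + j] * u[j];
            w[j] += v[i] * Ainv[i*n + j];
        }
    for (int i = 0; i < n; i++)
        lambda += v[i] * z[i];                /* lambda = v . z */

    for (int i = 0; i < n; i++)               /* subtract the rank-one */
        for (int j = 0; j < n; j++)           /* correction of (2.7.5) */
            Ainv[i*n + j] -= z[i] * w[j] / (1.0 + lambda);

    free(z);
    free(w);
}
```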
For some other sparse problems, the Sherman-Morrison formula cannot be directly applied for the simple reason that storage of the whole inverse matrix \(\mathbf{A}^{-1}\) is not feasible. If you want to add only a single correction of the form \(\mathbf{u}\otimes\mathbf{v}\), and solve the linear system
\[
(\mathbf{A}+\mathbf{u}\otimes\mathbf{v})\cdot\mathbf{x} = \mathbf{b}
\tag{2.7.6}
\]
then you proceed as follows. Using the fast method that is presumed available for the matrix \(\mathbf{A}\), solve the two auxiliary problems
\[
\mathbf{A}\cdot\mathbf{y} = \mathbf{b} \qquad \mathbf{A}\cdot\mathbf{z} = \mathbf{u}
\tag{2.7.7}
\]
for the vectors \(\mathbf{y}\) and \(\mathbf{z}\). In terms of these,
\[
\mathbf{x} = \mathbf{y} - \left[\frac{\mathbf{v}\cdot\mathbf{y}}{1+(\mathbf{v}\cdot\mathbf{z})}\right]\mathbf{z}
\tag{2.7.8}
\]
as we see by multiplying (2.7.2) on the right by \(\mathbf{b}\).
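A minimal C sketch of this two-solve procedure, assuming whatever fast solver is available for \(\mathbf{A}\) alone is supplied as the callback solveA (a name and signature invented for this sketch):

```c
#include <stdlib.h>

/* Solve (A + u outer v) . x = b by the two auxiliary solves (2.7.7)
 * and the combination (2.7.8); solveA stands in for the fast solver
 * presumed available for A alone. */
void sm_solve(void (*solveA)(const double rhs[], double sol[], int n),
              const double u[], const double v[],
              const double b[], double x[], int n)
{
    double *y = malloc(n * sizeof(double));
    double *z = malloc(n * sizeof(double));

    solveA(b, y, n);                          /* A . y = b */
    solveA(u, z, n);                          /* A . z = u */

    double vy = 0.0, vz = 0.0;
    for (int i = 0; i < n; i++) {
        vy += v[i] * y[i];
        vz += v[i] * z[i];
    }
    double fact = vy / (1.0 + vz);            /* bracket in (2.7.8) */
    for (int i = 0; i < n; i++)
        x[i] = y[i] - fact * z[i];

    free(y);
    free(z);
}
```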
Cyclic Tridiagonal Systems
So-called cyclic tridiagonal systems occur quite frequently, and are a good example of how to use the Sherman-Morrison formula in the manner just described. The equations have the form
\[
\begin{bmatrix}
b_1 & c_1 & 0 & \cdots & & & \beta \\
a_2 & b_2 & c_2 & & & & \\
 & & & \ddots & & & \\
 & & & & a_{N-1} & b_{N-1} & c_{N-1} \\
\alpha & & & \cdots & & a_N & b_N
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{N-1} \\ x_N \end{bmatrix}
=
\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_{N-1} \\ r_N \end{bmatrix}
\tag{2.7.9}
\]
This is a tridiagonal system, except for the matrix elements \(\alpha\) and \(\beta\) in the corners. Forms like this are typically generated by finite-differencing differential equations with periodic boundary conditions.
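For example, discretizing \(y'' = f(x)\) on a periodic grid of \(N\) points gives \(x_{j-1} - 2x_j + x_{j+1} = h^2 f_j\); the wrap-around of indices (\(x_0 \equiv x_N\) and \(x_{N+1} \equiv x_1\)) puts a coefficient of 1 in each corner, i.e., \(\alpha = \beta = 1\) in (2.7.9).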
We use the Sherman-Morrison formula, treating the system as tridiagonal plus a correction. In the notation of equation (2.7.6), define vectors \(\mathbf{u}\) and \(\mathbf{v}\) to be
\[
\mathbf{u} = \begin{bmatrix} \gamma \\ 0 \\ \vdots \\ 0 \\ \alpha \end{bmatrix} \qquad
\mathbf{v} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ \beta/\gamma \end{bmatrix}
\tag{2.7.10}
\]
Here \(\gamma\) is arbitrary for the moment. Then the matrix \(\mathbf{A}\) is the tridiagonal part of the matrix in (2.7.9), with two terms modified:
\[
b'_1 = b_1 - \gamma \qquad b'_N = b_N - \frac{\alpha\beta}{\gamma}
\tag{2.7.11}
\]
The correction \(\mathbf{u}\otimes\mathbf{v}\) then restores the corner elements \(\beta\) and \(\alpha\) and cancels the modifications \(-\gamma\) and \(-\alpha\beta/\gamma\), reproducing (2.7.9).
We now solve equations (2.7.7) with the standard tridiagonal algorithm, and then get the solution from equation (2.7.8).
The routine cyclic below implements this algorithm. We choose the arbitrary parameter \(\gamma = -b_1\) to avoid loss of precision by subtraction in the first of equations (2.7.11).
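A minimal C sketch of such a routine follows, assuming a plain Thomas-algorithm tridiagonal solver; the helper tridag here is written for this sketch (it assumes \(n > 2\) and nonzero pivots) rather than reproduced from the book's listing.

```c
#include <stdlib.h>

/* Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = r[i]
 * for i = 0..n-1 (a[0] and c[n-1] unused) by the Thomas algorithm.
 * Assumes no zero pivots arise. */
static void tridag(const double a[], const double b[], const double c[],
                   const double r[], double x[], int n)
{
    double *gam = malloc(n * sizeof(double));
    double bet = b[0];
    x[0] = r[0] / bet;
    for (int j = 1; j < n; j++) {             /* forward sweep */
        gam[j] = c[j-1] / bet;
        bet = b[j] - a[j] * gam[j];
        x[j] = (r[j] - a[j] * x[j-1]) / bet;
    }
    for (int j = n - 2; j >= 0; j--)          /* back substitution */
        x[j] -= gam[j+1] * x[j+1];
    free(gam);
}

/* Solve the cyclic system (2.7.9): tridiagonal (a,b,c) plus corner
 * elements alpha (lower left) and beta (upper right). Requires n > 2. */
void cyclic(const double a[], const double b[], const double c[],
            double alpha, double beta, const double r[], double x[], int n)
{
    double *bb = malloc(n * sizeof(double));
    double *u  = malloc(n * sizeof(double));
    double *z  = malloc(n * sizeof(double));
    double gamma = -b[0];          /* avoids cancellation in b'_1, (2.7.11) */

    for (int i = 0; i < n; i++) bb[i] = b[i];
    bb[0]   = b[0] - gamma;                   /* modified diagonal (2.7.11) */
    bb[n-1] = b[n-1] - alpha * beta / gamma;

    tridag(a, bb, c, r, x, n);                /* A . y = r (y stored in x) */

    for (int i = 0; i < n; i++) u[i] = 0.0;   /* u of equation (2.7.10) */
    u[0]   = gamma;
    u[n-1] = alpha;
    tridag(a, bb, c, u, z, n);                /* A . z = u */

    /* v . y and v . z with v = (1, 0, ..., 0, beta/gamma), then (2.7.8) */
    double fact = (x[0] + beta * x[n-1] / gamma) /
                  (1.0 + z[0] + beta * z[n-1] / gamma);
    for (int i = 0; i < n; i++)
        x[i] -= fact * z[i];

    free(bb); free(u); free(z);
}
```

Note that the correction costs only one extra tridiagonal solve and a few dot products, so the whole procedure remains \(O(N)\).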