Inverting a Tridiagonal Matrix

This post shows how to write the inverse of a tridiagonal matrix in spectral decomposition form. This form is convenient if you're only interested in one particular element of the inverse and want it as a sum of terms involving the eigenvalues of the matrix.

Let \(\mathbf{A}\) be a matrix with a complete orthonormal set of eigenvectors \(\lvert v_k \rangle\) and corresponding eigenvalues \(x_k\), so that

\[\mathbf{A}\lvert v_k \rangle=x_k\lvert v_k \rangle\]

Then we can write \(\mathbf{A}\) in spectral decomposition form as:

\[\mathbf{A}=\sum_k x_k \lvert v_k \rangle \langle v_k \rvert\]

Since the inverse of \(\mathbf{A}\) has the same eigenvectors but reciprocal eigenvalues (assuming all \(x_k \neq 0\)), we can likewise write the inverse as:

\[\mathbf{A^{-1}}=\sum_k \frac{\lvert v_k \rangle \langle v_k \rvert}{x_k}\]
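
As a quick numerical sanity check, here is a minimal NumPy sketch (the symmetric example matrix is an arbitrary choice of mine, not one from this post) showing that the inverse assembled from the spectral sum matches the directly computed inverse:

```python
import numpy as np

# Arbitrary symmetric example matrix, so the eigenvectors are orthonormal
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

x, V = np.linalg.eigh(A)  # eigenvalues x[k], orthonormal eigenvectors V[:, k]

# Spectral form of the inverse: sum over k of |v_k><v_k| / x_k
A_inv = sum(np.outer(V[:, k], V[:, k]) / x[k] for k in range(len(x)))

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```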

Expressing the inverse in this form is often quite useful, especially when the eigenvalues and eigenvectors of \(\mathbf{A}\) are known or can easily be calculated. One type of matrix where they are easily calculated is a tridiagonal matrix with constant diagonals. A general matrix of this type is \(\mathbf{A}=\mathbf{tridiag}(c,a,b)\), which in the \(n=4\) dimensional case looks like:

\[\begin{pmatrix}a & b & 0 & 0\\c & a & b & 0\\0 & c & a & b\\0 & 0 & c & a\end{pmatrix}\]

The matrix can be made symmetric with a similarity transform (assuming \(bc>0\))

\[\mathbf{B} = \mathbf{S}^{-1}\mathbf{A}\mathbf{S} = \begin{pmatrix}a & \sqrt{bc} & 0 & 0\\\sqrt{bc} & a &\sqrt{bc} & 0\\0 & \sqrt{bc} & a & \sqrt{bc}\\0 & 0 & \sqrt{bc} & a\end{pmatrix}\]

where \(\mathbf{S}\) is the diagonal matrix

\[\mathbf{S} = \begin{pmatrix}1 & 0 & 0 & 0\\0 & (c/b)^{1/2} & 0 & 0\\0 & 0 & c/b & 0\\0 & 0 & 0 & (c/b)^{3/2}\end{pmatrix}\]
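
As a check, here is a small NumPy sketch (the values of \(n\), \(a\), \(b\), \(c\) are arbitrary choices of mine with \(bc>0\)) confirming that conjugating \(\mathbf{A}\) by this \(\mathbf{S}\) produces a symmetric matrix with \(\sqrt{bc}\) on the off-diagonals:

```python
import numpy as np

n, a, b, c = 4, 2.0, 3.0, 0.5  # arbitrary example values with b*c > 0

A = np.diag([a] * n) + np.diag([b] * (n - 1), 1) + np.diag([c] * (n - 1), -1)

# Diagonal similarity transform with entries (c/b)^(i/2), i = 0..n-1
S = np.diag([(c / b) ** (i / 2) for i in range(n)])
B = np.linalg.inv(S) @ A @ S

print(np.allclose(B, B.T))                          # True: B is symmetric
print(np.allclose(np.diag(B, 1), np.sqrt(b * c)))   # off-diagonals equal sqrt(bc)
```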

How does this affect the eigenvalues? We can write the eigenvalue equation as \(\mathbf{S}^{-1}\mathbf{A}\mathbf{S}\mathbf{S}^{-1}\lvert v_k \rangle=x_k\mathbf{S}^{-1}\lvert v_k \rangle\), or \(\mathbf{B}\mathbf{S}^{-1}\lvert v_k \rangle=x_k\mathbf{S}^{-1}\lvert v_k \rangle\). Therefore \(\mathbf{B}\) has the same eigenvalues as \(\mathbf{A}\), but the eigenvectors become \(\mathbf{S}^{-1}\lvert v_k \rangle\). Let \(\lvert u_k \rangle=\mathbf{S}^{-1}\lvert v_k \rangle\); then the eigenvalue equation for \(\mathbf{B}\) is \(\mathbf{B}\lvert u_k \rangle=x_k\lvert u_k \rangle\). Dividing this equation by \(\sqrt{bc}\) simplifies the matrix to

\[\mathbf{C} = \mathbf{B}/\sqrt{bc} = \begin{pmatrix}\alpha & 1 & 0 & 0\\1 & \alpha & 1 & 0\\0 & 1 & \alpha & 1\\0 & 0 & 1 & \alpha\end{pmatrix}\]

where \(\alpha=a/\sqrt{bc}\). The eigenvalues similarly become \(y_k=x_k/\sqrt{bc}\), and we now have the eigenvalue equation \(\mathbf{C}\lvert u_k \rangle=y_k\lvert u_k \rangle\). The eigenvalues \(y_k\) are the roots of the characteristic polynomial \(p(y)=\mathrm{det}(y\mathbf{I}-\mathbf{C})\). Let \(p_n(y)\) be this determinant when \(\mathbf{C}\) has dimension \(n\); then it can be calculated recursively using the equation

\[p_n(y)=(y-\alpha)p_{n-1}(y)-p_{n-2}(y)\]

with initial values \(p_0(y)=1\), \(p_1(y)=y-\alpha\). This recurrence follows easily from cofactor expansion of the determinant along the last row.
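
The recurrence is easy to test numerically. Here is a minimal sketch (the function name `p` and the test values are my own choices) comparing it against a direct determinant computation:

```python
import numpy as np

def p(n, y, alpha):
    """p_n(y) = det(y*I - C) via the three-term recurrence."""
    p_prev, p_curr = 1.0, y - alpha   # p_0 and p_1
    for _ in range(n - 1):
        p_prev, p_curr = p_curr, (y - alpha) * p_curr - p_prev
    return p_curr if n >= 1 else p_prev

n, alpha, y = 4, 1.5, 0.7  # arbitrary test values
C = np.diag([alpha] * n) + np.diag([1.0] * (n - 1), 1) + np.diag([1.0] * (n - 1), -1)
print(np.isclose(p(n, y, alpha), np.linalg.det(y * np.eye(n) - C)))  # True
```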

With the change of variables \(2z=y-\alpha\), the equation becomes the recurrence for the Chebyshev polynomials of the second kind:

\[U_n(z)=2zU_{n-1}(z)-U_{n-2}(z)\]

Since the initial values also match (\(p_0(y)=1=U_0(z)\) and \(p_1(y)=y-\alpha=2z=U_1(z)\)), we have \(p_n(y)=U_n(z)\), and the roots of \(U_n\) are easily found. Letting \(z=\cos{\theta}\) we have

\[U_n(\cos{\theta})=\frac{\sin{((n+1)\theta)}}{\sin{\theta}}\]

The roots of \(U_n\) occur at \(\theta_k=k\pi/(n+1)\) for \(k=1,2,\ldots,n\), therefore the corresponding roots of \(p_n(y)\) are at:

\[y_k=\alpha+2z_k=\alpha+2\cos{\theta_k}\]
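
For example, in the \(n=4\) case shown above, \(\theta_k=k\pi/5\), and since \(2\cos(\pi/5)=(1+\sqrt{5})/2\) and \(2\cos(2\pi/5)=(\sqrt{5}-1)/2\), the four roots are

\[\alpha\pm\frac{1+\sqrt{5}}{2}\quad\text{and}\quad\alpha\pm\frac{\sqrt{5}-1}{2}\]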

These \(y_k\) are the eigenvalues of the matrix \(\mathbf{C}\). The eigenvalues of the matrices \(\mathbf{A}\) and \(\mathbf{B}\) are

\[x_k=\sqrt{bc}y_k=a+2\sqrt{bc}\cos{\theta_k}\]
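
This closed form is easy to confirm against a numerical eigenvalue solver. Here is a small NumPy sketch (the parameter values are arbitrary choices of mine with \(bc>0\)):

```python
import numpy as np

n, a, b, c = 6, 2.0, 3.0, 0.5  # arbitrary example values with b*c > 0

A = np.diag([a] * n) + np.diag([b] * (n - 1), 1) + np.diag([c] * (n - 1), -1)

# Closed form: x_k = a + 2*sqrt(bc)*cos(k*pi/(n+1)), k = 1..n
k = np.arange(1, n + 1)
x = a + 2 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))

print(np.allclose(np.sort(x), np.sort(np.linalg.eigvals(A).real)))  # True
```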

The only thing still needed to construct the inverse is the eigenvectors. The nice thing about symmetric tridiagonal matrices with constant diagonals is that they all have the same eigenvectors, regardless of the diagonal values. The \(i^{\mathrm{th}}\) component of the \(k^{\mathrm{th}}\) eigenvector is given by:

\[\lvert u_k \rangle_i=u_{i,k}=\sqrt{\frac{2}{n+1}}\sin{\frac{ik\pi}{n+1}}\]

We will not derive this result here since it is easy to verify using the formula for the eigenvalues derived above. The eigenvectors of the nonsymmetric matrix \(\mathbf{A}\) are then \(\lvert v_k \rangle=\mathbf{S}\lvert u_k \rangle\), where \(\mathbf{S}\) is the diagonal matrix defined above.
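
Putting everything together, here is a sketch (variable names are my own) that builds the inverse of \(\mathbf{A}=\mathbf{tridiag}(c,a,b)\) entirely from the closed-form eigenvalues and eigenvectors, via \(\mathbf{A}^{-1}=\mathbf{S}\left(\sum_k \lvert u_k \rangle \langle u_k \rvert / x_k\right)\mathbf{S}^{-1}\), and checks it against a direct inverse:

```python
import numpy as np

n, a, b, c = 5, 2.0, 3.0, 0.5  # arbitrary example values with b*c > 0

A = np.diag([a] * n) + np.diag([b] * (n - 1), 1) + np.diag([c] * (n - 1), -1)

# Closed-form eigenvalues x_k and eigenvector matrix U (column k is u_k)
k = np.arange(1, n + 1)
i = np.arange(1, n + 1)
x = a + 2 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))
U = np.sqrt(2 / (n + 1)) * np.sin(np.outer(i, k) * np.pi / (n + 1))

# Diagonal similarity transform S, as defined above
S = np.diag([(c / b) ** (m / 2) for m in range(n)])

# A^{-1} = S (sum_k |u_k><u_k| / x_k) S^{-1}
A_inv = S @ (U @ np.diag(1 / x) @ U.T) @ np.linalg.inv(S)

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```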

In the next post I will show an interesting application of this.


© 2010-2019 Stefan Hollos and Richard Hollos
