Hessenberg matrix


In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal.[1] They are named after Karl Hessenberg.[2]

A Hessenberg decomposition is a matrix decomposition of a matrix $A$ into a unitary matrix $P$ and a Hessenberg matrix $H$ such that

$$PHP^{*} = A,$$

where $P^{*}$ denotes the conjugate transpose.

Definitions

Upper Hessenberg matrix

A square $n \times n$ matrix $A$ is said to be in upper Hessenberg form, or to be an upper Hessenberg matrix, if $a_{i,j} = 0$ for all $i, j$ with $i > j+1$.

An upper Hessenberg matrix is called unreduced if all subdiagonal entries are nonzero, i.e. if $a_{i+1,i} \neq 0$ for all $i \in \{1, \ldots, n-1\}$.[3]

Lower Hessenberg matrix

A square $n \times n$ matrix $A$ is said to be in lower Hessenberg form, or to be a lower Hessenberg matrix, if its transpose is an upper Hessenberg matrix, or equivalently if $a_{i,j} = 0$ for all $i, j$ with $j > i+1$.

A lower Hessenberg matrix is called unreduced if all superdiagonal entries are nonzero, i.e. if $a_{i,i+1} \neq 0$ for all $i \in \{1, \ldots, n-1\}$.
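These index conditions are easy to test directly. The following is a minimal NumPy sketch (the function names are illustrative, not from any library) of the two structural checks:

```python
import numpy as np

def is_upper_hessenberg(A):
    """All entries strictly below the first subdiagonal (i > j + 1) are zero."""
    return not np.any(np.tril(A, -2))

def is_lower_hessenberg(A):
    """All entries strictly above the first superdiagonal (j > i + 1) are zero."""
    return not np.any(np.triu(A, 2))
```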

Examples

Consider the following matrices.

$$A = \begin{bmatrix} 1 & 4 & 2 & 3 \\ 3 & 4 & 1 & 7 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{bmatrix}$$

$$B = \begin{bmatrix} 1 & 2 & 0 & 0 \\ 5 & 2 & 3 & 0 \\ 3 & 4 & 3 & 7 \\ 5 & 6 & 1 & 1 \end{bmatrix}$$

$$C = \begin{bmatrix} 1 & 2 & 0 & 0 \\ 5 & 2 & 0 & 0 \\ 3 & 4 & 3 & 7 \\ 5 & 6 & 1 & 1 \end{bmatrix}$$

The matrix $A$ is an unreduced upper Hessenberg matrix, $B$ is an unreduced lower Hessenberg matrix, and $C$ is a lower Hessenberg matrix that is not unreduced, since its superdiagonal entry $c_{2,3} = 0$.

Computer programming

Many linear algebra algorithms require significantly less computational effort when applied to triangular matrices, and this improvement often carries over to Hessenberg matrices as well. If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular one, reduction to Hessenberg form is often the next best thing. In fact, any matrix can be reduced to Hessenberg form in a finite number of steps, for example through a sequence of Householder unitary similarity transformations. Subsequent reduction of the Hessenberg matrix to a triangular one is then achieved by iterative procedures, such as shifted QR factorization; in eigenvalue algorithms, the shifted QR steps are combined with deflation. Reducing a general matrix to Hessenberg form first, and only then iterating down to a triangular matrix, economizes the arithmetic in the QR algorithm for eigenvalue problems, compared with iterating on the general matrix directly.
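In practice, this first reduction step is available in standard libraries; for example, SciPy exposes it as scipy.linalg.hessenberg, which computes the unitary similarity by Householder reflections. A minimal sketch:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# H is upper Hessenberg, Q is unitary, and Q H Q* reconstructs A
H, Q = hessenberg(A, calc_q=True)

assert np.allclose(np.tril(H, -2), 0)       # zeros below the first subdiagonal
assert np.allclose(Q @ H @ Q.conj().T, A)   # unitary similarity preserved
```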

Reduction to Hessenberg matrix

Householder transformations

Any $n \times n$ matrix can be transformed into a Hessenberg matrix by a similarity transformation using Householder transformations. The following procedure for such a transformation is adapted from A Second Course in Linear Algebra by Garcia & Horn.[4]

Let $A$ be any real or complex $n \times n$ matrix, let $A'$ be the $(n-1) \times n$ submatrix of $A$ obtained by removing the first row of $A$, and let $\mathbf{a}'_1$ be the first column of $A'$. Construct the $(n-1) \times (n-1)$ Householder matrix $V_1 = I_{n-1} - 2\frac{ww^{*}}{\|w\|^{2}}$, where

$$w = \begin{cases} \|\mathbf{a}'_1\|_2 \mathbf{e}_1 - \mathbf{a}'_1, & a'_{11} = 0 \\[4pt] \|\mathbf{a}'_1\|_2 \mathbf{e}_1 + \dfrac{\overline{a'_{11}}}{|a'_{11}|} \mathbf{a}'_1, & a'_{11} \neq 0 \end{cases}$$

This Householder matrix maps $\mathbf{a}'_1$ to a unimodular scalar multiple of $\|\mathbf{a}'_1\|\,\mathbf{e}_1$, and consequently the block matrix

$$U_1 = \begin{bmatrix} 1 & \mathbf{0} \\ \mathbf{0} & V_1 \end{bmatrix}$$

maps the matrix $A$ to the matrix $U_1 A$, which has only zeros below the second entry of the first column. Now construct an $(n-2) \times (n-2)$ Householder matrix $V_2$ in the same manner as $V_1$, such that $V_2$ maps the first column of $A''$ to $\|\mathbf{a}''_1\|\,\mathbf{e}_1$, where $A''$ is the submatrix of $A'$ obtained by removing the first row and the first column of $A'$. Then let

$$U_2 = \begin{bmatrix} 1 & 0 & \mathbf{0} \\ 0 & 1 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & V_2 \end{bmatrix},$$

which maps $U_1 A$ to the matrix $U_2 U_1 A$, which has only zeros below the first and second entries of the subdiagonal. Now construct $V_3$ and then $U_3$ in the same manner, but for the matrix $A'''$ obtained by removing the first row and first column of $A''$, and proceed as in the previous steps. Continue like this for a total of $n-2$ steps.

By construction of $U_k$, the first $k$ columns of any $n \times n$ matrix are invariant under multiplication by $U_k^{*}$ from the right. Hence, any matrix can be transformed to an upper Hessenberg matrix by a similarity transformation of the form

$$U_{n-2}(\cdots(U_2(U_1 A U_1^{*})U_2^{*})\cdots)U_{n-2}^{*} = U_{n-2}\cdots U_2 U_1 A\,(U_{n-2}\cdots U_2 U_1)^{*} = UAU^{*}.$$
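The whole procedure can be condensed into a short NumPy sketch. Rather than forming the block matrices $U_k$ explicitly, it applies each reflector $V_k = I - 2ww^{*}$ to the trailing rows and columns; the function name is illustrative, and the phase convention for $w$ (chosen to avoid cancellation) agrees with the case split above up to a unimodular factor, which leaves the reflector unchanged:

```python
import numpy as np

def householder_hessenberg(A):
    """Reduce A to upper Hessenberg form H, returning a unitary Q with
    Q @ H @ Q.conj().T equal to A up to rounding."""
    H = np.array(A, dtype=complex)
    n = H.shape[0]
    Q = np.eye(n, dtype=complex)
    for k in range(n - 2):
        a = H[k + 1:, k].copy()
        norm_a = np.linalg.norm(a)
        if norm_a == 0.0:
            continue                          # nothing to annihilate
        w = a
        # add the norm along e_1 with the phase of a[0] to avoid cancellation
        w[0] += (a[0] / abs(a[0]) if a[0] != 0 else 1.0) * norm_a
        w /= np.linalg.norm(w)
        # similarity step H <- U_k H U_k*, with V_k = I - 2 w w* (Hermitian)
        H[k + 1:, :] -= 2.0 * np.outer(w, w.conj() @ H[k + 1:, :])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ w, w.conj())
        Q[:, k + 1:] -= 2.0 * np.outer(Q[:, k + 1:] @ w, w.conj())
    return H, Q

A = np.random.default_rng(0).standard_normal((5, 5))
H, Q = householder_hessenberg(A)
assert np.allclose(np.tril(H, -2), 0)        # upper Hessenberg structure
assert np.allclose(Q @ H @ Q.conj().T, A)    # unitary similarity to A
```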

Jacobi (Givens) rotations

A Jacobi rotation (also called a Givens rotation) is an orthogonal matrix transformation of the form

$$A \to A' = J(p,q,\theta)^{T} A\, J(p,q,\theta),$$

where $J(p,q,\theta)$, $p < q$, is the Jacobi rotation matrix, all of whose elements are zero except for

$$\begin{aligned} J(p,q,\theta)_{ii} &= 1 \quad \forall\, i \neq p, q \\ J(p,q,\theta)_{pp} &= \cos(\theta) \\ J(p,q,\theta)_{qq} &= \cos(\theta) \\ J(p,q,\theta)_{pq} &= \sin(\theta) \\ J(p,q,\theta)_{qp} &= -\sin(\theta). \end{aligned}$$

One can zero the matrix element $A'_{p-1,q}$ by choosing the rotation angle $\theta$ to satisfy the equation

$$A_{p-1,p}\sin\theta + A_{p-1,q}\cos\theta = 0.$$

Now, the sequence of such Jacobi rotations with the following pairs $(p,q)$

$$(p,q) = (2,3),(2,4),\dots,(2,n),(3,4),\dots,(3,n),\dots,(n-1,n)$$

reduces the matrix $A$ to lower Hessenberg form.[5]
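A direct transcription of this sweep for real matrices is sketched below (0-based indexing, so the mathematical index $p$ becomes p-1 in code; the function name is illustrative). Each rotation picks $c = a/r$, $s = -b/r$ with $r = \sqrt{a^2+b^2}$, which solves the angle equation above without evaluating trigonometric functions:

```python
import numpy as np

def jacobi_hessenberg_lower(A):
    """Reduce a real square matrix to lower Hessenberg form by the sweep
    (p, q) = (2,3), ..., (2,n), (3,4), ..., (n-1,n) of Jacobi rotations."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for p in range(1, n - 1):                 # 0-based: math p = 2 .. n-1
        for q in range(p + 1, n):             # 0-based: math q = p+1 .. n
            a, b = A[p - 1, p], A[p - 1, q]   # entries that fix the angle
            if b == 0.0:
                continue                      # target entry already zero
            r = np.hypot(a, b)
            c, s = a / r, -b / r              # cos(theta), sin(theta)
            # A <- A J(p,q,theta): mixes columns p and q, zeroing A[p-1, q]
            col_p, col_q = A[:, p].copy(), A[:, q].copy()
            A[:, p] = c * col_p - s * col_q
            A[:, q] = s * col_p + c * col_q
            # A <- J(p,q,theta)^T A: mixes rows p and q (row p-1 untouched)
            row_p, row_q = A[p, :].copy(), A[q, :].copy()
            A[p, :] = c * row_p - s * row_q
            A[q, :] = s * row_p + c * row_q
    return A

M = np.random.default_rng(1).standard_normal((5, 5))
L = jacobi_hessenberg_lower(M)
assert np.allclose(np.triu(L, 2), 0.0)        # lower Hessenberg structure
```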

Properties

For $n \in \{1,2\}$, it is vacuously true that every $n \times n$ matrix is both upper Hessenberg and lower Hessenberg.[6]

The product of a Hessenberg matrix with a triangular matrix is again Hessenberg. More precisely, if $A$ is upper Hessenberg and $T$ is upper triangular, then $AT$ and $TA$ are upper Hessenberg.
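This closure property is easy to verify numerically; a small sketch (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.triu(rng.standard_normal((6, 6)), -1)   # random upper Hessenberg
T = np.triu(rng.standard_normal((6, 6)))       # random upper triangular

for P in (A @ T, T @ A):
    assert np.allclose(np.tril(P, -2), 0.0)    # products stay upper Hessenberg
```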

A matrix that is both upper Hessenberg and lower Hessenberg is a tridiagonal matrix, of which the Jacobi matrix is an important example. This class includes the symmetric and Hermitian Hessenberg matrices; a Hermitian matrix can be reduced to a real symmetric tridiagonal matrix.[7]
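In particular, applying a Hessenberg reduction to a Hermitian matrix produces a (numerically) tridiagonal result, which is what the specialized symmetric eigenvalue routines exploit. A brief SciPy sketch of this observation:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = X + X.conj().T                      # Hermitian test matrix

H = hessenberg(A)                       # unitarily similar to A
assert np.allclose(np.tril(H, -2), 0)   # upper Hessenberg...
assert np.allclose(np.triu(H, 2), 0)    # ...and Hermitian, hence tridiagonal
```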

Hessenberg operator

The Hessenberg operator is an infinite dimensional Hessenberg matrix. It commonly occurs as the generalization of the Jacobi operator to a system of orthogonal polynomials for the space of square-integrable holomorphic functions over some domain, that is, a Bergman space. In this case, the Hessenberg operator is the right-shift operator $S$, given by

$$[Sf](z) = z f(z).$$

The eigenvalues of each principal submatrix of the Hessenberg operator are the zeros of the characteristic polynomial of that submatrix. These polynomials are called the Bergman polynomials, and they provide an orthogonal polynomial basis for the Bergman space.
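As a concrete finite-dimensional illustration (a sketch assuming the Bergman space of the unit disk, where the normalized monomials $e_k(z) = \sqrt{k+1}\,z^k$ form an orthonormal basis and the shift degenerates to a weighted subdiagonal), one can build a principal submatrix of $S$ and confirm its Hessenberg structure:

```python
import numpy as np

n = 6
# Matrix of [Sf](z) = z f(z) in the basis e_k(z) = sqrt(k+1) z^k:
# z e_k = sqrt((k+1)/(k+2)) e_{k+1}, so S has one weighted subdiagonal.
S = np.zeros((n, n))
for k in range(n - 1):
    S[k + 1, k] = np.sqrt((k + 1) / (k + 2))

assert np.all(np.tril(S, -2) == 0)   # upper Hessenberg (here a weighted shift)
```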


Notes

  1. Horn & Johnson (1985), page 28; Stoer & Bulirsch (2002), page 251
  2. Biswa Nath Datta (2010), Numerical Linear Algebra and Applications, 2nd Ed., Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-685-6, p. 307
  3. Horn & Johnson (1985), page 35
  4. Garcia, Stephan Ramon; Horn, Roger A. (2017). A Second Course in Linear Algebra. Cambridge University Press. ISBN 9781107103818.
  5. Bini, Dario A.; Robol, Leonardo (2016). "Quasiseparable Hessenberg reduction of real diagonal plus low rank matrices and applications". Linear Algebra and Its Applications. 502: 186–213. arXiv:1501.07812. doi:10.1016/j.laa.2015.08.026.
  6. Lecture notes for 2016-10-21, Cornell University.
  7. "Computational Routines (eigenvalues) in LAPACK". sites.science.oregonstate.edu. Retrieved 2020-05-24.

References

  • Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6.
  • Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95452-3.
  • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 11.6.2. Reduction to Hessenberg Form", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8

External links

  • Hessenberg matrix at MathWorld.
  • Hessenberg matrix at PlanetMath.
  • High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
  • Algorithm overview