Pseudo-determinant

In linear algebra and statistics, the pseudo-determinant[1] is the product of all non-zero eigenvalues of a square matrix. It coincides with the regular determinant when the matrix is non-singular.

Definition

The pseudo-determinant of a square n-by-n matrix A may be defined as:

$$|\mathbf{A}|_{+} = \lim_{\alpha \to 0} \frac{|\mathbf{A} + \alpha \mathbf{I}|}{\alpha^{n - \operatorname{rank}(\mathbf{A})}}$$

where |A| denotes the usual determinant, I denotes the identity matrix and rank(A) denotes the rank of A.[2]
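
As a numerical sanity check of this limit against the product of the non-zero eigenvalues, the following NumPy sketch evaluates the ratio for one small value of $\alpha$ on a simple rank-deficient matrix. The helper name pdet_limit, the choice of $\alpha$ and the rank tolerance are illustrative assumptions of this example, not standard routines.

```python
import numpy as np

def pdet_limit(A, alpha=1e-9, tol=1e-12):
    # Approximate |A|_+ by the ratio |A + alpha*I| / alpha^(n - rank(A))
    # for a small alpha, following the limit definition above.
    n = A.shape[0]
    r = np.linalg.matrix_rank(A, tol=tol)
    return np.linalg.det(A + alpha * np.eye(n)) / alpha ** (n - r)

# Rank-2 example: the non-zero eigenvalues are 2 and 3, so |A|_+ should be 6.
A = np.diag([2.0, 3.0, 0.0])
print(pdet_limit(A))                                                     # ~6.0
print(np.prod([ev for ev in np.linalg.eigvals(A) if abs(ev) > 1e-12]))   # 6.0
```

In floating point the ratio is sensitive to the choices of $\alpha$ and of the rank tolerance, so this is a check of the definition rather than a robust way to compute the pseudo-determinant.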

Definition of the pseudo-determinant using the Vahlen matrix

The Vahlen matrix of a conformal transformation (the Möbius transformation $(ax+b)(cx+d)^{-1}$ for $a, b, c, d \in \mathcal{G}(p,q)$) is defined as $[f] = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. By the pseudo-determinant of the Vahlen matrix for the conformal transformation, we mean

$$\operatorname{pdet} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = a d^{\dagger} - b c^{\dagger}.$$

If $\operatorname{pdet}[f] > 0$, the transformation is sense-preserving (a rotation), whereas if $\operatorname{pdet}[f] < 0$, it is sense-reversing (a reflection).

Computation for positive semi-definite case

If $A$ is positive semi-definite, then the singular values and eigenvalues of $A$ coincide. In this case, if the singular value decomposition (SVD) is available, then $|\mathbf{A}|_{+}$ may be computed as the product of the non-zero singular values. If all singular values are zero, then the pseudo-determinant is 1, the empty product.
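
A minimal NumPy sketch of this SVD-based computation follows; the helper name pdet_psd and the relative tolerance used to decide which singular values count as non-zero are assumptions of this example.

```python
import numpy as np

def pdet_psd(A, tol=1e-12):
    # Pseudo-determinant of a positive semi-definite matrix:
    # the product of its non-zero singular values; the empty product is 1.
    s = np.linalg.svd(A, compute_uv=False)
    nz = s[s > tol * max(s[0], 1.0)]
    return nz.prod() if nz.size else 1.0

# Rank-1 example: the eigenvalues are 2 and 0, so the pseudo-determinant is 2.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
print(pdet_psd(A))   # 2.0
```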

Supposing $\operatorname{rank}(A) = k$, so that $k$ is the number of non-zero singular values, we may write $A = P P^{\dagger}$, where $P$ is some n-by-k matrix and the dagger denotes the conjugate transpose. The singular values of $A$ are the squares of the singular values of $P$, and thus $|A|_{+} = \left|P^{\dagger} P\right|$, where $\left|P^{\dagger} P\right|$ is the usual determinant in $k$ dimensions. Further, if $P$ is written as the block column $P = \left(\begin{smallmatrix} C \\ D \end{smallmatrix}\right)$, then it holds, for any heights of the blocks $C$ and $D$, that $|A|_{+} = \left|C^{\dagger} C + D^{\dagger} D\right|$.
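
These identities can be checked numerically; in the sketch below the factor $P$, the block split into $C$ and $D$, and the matrix sizes are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
P = rng.standard_normal((n, k))      # an n-by-k factor of full column rank
A = P @ P.conj().T                   # rank-k positive semi-definite matrix

s = np.linalg.svd(A, compute_uv=False)
print(np.prod(s[:k]))                                    # |A|_+ from the SVD
print(np.linalg.det(P.conj().T @ P))                     # |P^dagger P|
C, D = P[:2], P[2:]                                      # any block split of P
print(np.linalg.det(C.conj().T @ C + D.conj().T @ D))    # |C^dagger C + D^dagger D|
```

All three printed values agree, since $P^{\dagger} P = C^{\dagger} C + D^{\dagger} D$ for any split of the rows of $P$.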

Application in statistics

If a statistical procedure ordinarily compares distributions in terms of the determinants of variance-covariance matrices, then, in the case of singular matrices, the comparison can be carried out using a combination of the ranks of the matrices and their pseudo-determinants: the matrix of higher rank is counted as "larger", and the pseudo-determinants are used only when the ranks are equal.[3] Pseudo-determinants are therefore sometimes reported in the output of statistical programs when covariance matrices are singular.[4]
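
As a rough illustration of this rank-then-pseudo-determinant ordering, the following sketch builds a lexicographic comparison key; the helper compare_key and its tolerance are hypothetical and do not correspond to the routine of any particular statistical package.

```python
import numpy as np

def compare_key(S, tol=1e-12):
    # Lexicographic key (rank, pseudo-determinant) for ordering possibly
    # singular covariance matrices: higher rank wins, and the
    # pseudo-determinant breaks ties between matrices of equal rank.
    s = np.linalg.svd(S, compute_uv=False)
    nz = s[s > tol * max(s[0], 1.0)]
    return (nz.size, nz.prod() if nz.size else 1.0)

S1 = np.diag([4.0, 1.0, 0.0])   # rank 2, pseudo-determinant 4
S2 = np.diag([3.0, 2.0, 0.0])   # rank 2, pseudo-determinant 6
print(compare_key(S1) < compare_key(S2))   # True: equal ranks, 4 < 6
```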

References

  1. ^ Minka, T. P. (2001). "Inferring a Gaussian Distribution" (PDF).
  2. ^ Florescu, Ionut (2014). Probability and Stochastic Processes. Wiley. p. 529.
  3. ^ SAS documentation on "Robust Distance".
  4. ^ Bohling, Geoffrey C. (1997). "GSLIB-style programs for discriminant analysis and regionalized classification". Computers & Geosciences, 23 (7), 739–761. doi:10.1016/S0098-3004(97)00050-2.