Permutation matrix

Matrix with exactly one 1 per row and column

In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column, with all other entries 0.[1]: 26  An n × n permutation matrix can represent a permutation of n elements. Pre-multiplying an n-row matrix M by a permutation matrix P, forming PM, permutes the rows of M, while post-multiplying an n-column matrix M, forming MP, permutes the columns of M.

Every permutation matrix P is orthogonal, with its inverse equal to its transpose: $P^{-1}=P^{\mathsf{T}}$.[1]: 26  Indeed, permutation matrices can be characterized as the orthogonal matrices whose entries are all non-negative.[2]

The two permutation/matrix correspondences

There are two natural one-to-one correspondences between permutations and permutation matrices, one of which works along the rows of the matrix, the other along its columns. Here is an example, starting with a permutation π in two-line form at the upper left:

$$\begin{matrix}
\pi\colon\begin{pmatrix}1&2&3&4\\3&2&4&1\end{pmatrix} & \longleftrightarrow & R_{\pi}\colon\begin{pmatrix}0&0&1&0\\0&1&0&0\\0&0&0&1\\1&0&0&0\end{pmatrix}\\[5pt]
\Big\updownarrow & & \Big\updownarrow\\[5pt]
C_{\pi}\colon\begin{pmatrix}0&0&0&1\\0&1&0&0\\1&0&0&0\\0&0&1&0\end{pmatrix} & \longleftrightarrow & \pi^{-1}\colon\begin{pmatrix}1&2&3&4\\4&2&1&3\end{pmatrix}
\end{matrix}$$

The row-based correspondence takes the permutation π to the matrix $R_{\pi}$ at the upper right. The first row of $R_{\pi}$ has its 1 in the third column because $\pi(1)=3$. More generally, we have $R_{\pi}=(r_{ij})$ where $r_{ij}=1$ when $j=\pi(i)$ and $r_{ij}=0$ otherwise.

The column-based correspondence takes π to the matrix $C_{\pi}$ at the lower left. The first column of $C_{\pi}$ has its 1 in the third row because $\pi(1)=3$. More generally, we have $C_{\pi}=(c_{ij})$ where $c_{ij}$ is 1 when $i=\pi(j)$ and 0 otherwise. Since the two recipes differ only by swapping i with j, the matrix $C_{\pi}$ is the transpose of $R_{\pi}$; and, since $R_{\pi}$ is a permutation matrix, we have $C_{\pi}=R_{\pi}^{\mathsf{T}}=R_{\pi}^{-1}$. Tracing the other two sides of the big square, we have $R_{\pi^{-1}}=C_{\pi}=R_{\pi}^{-1}$ and $C_{\pi^{-1}}=R_{\pi}$.[3]
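
To make the two recipes concrete, here is a minimal Python sketch (NumPy assumed as a dependency; the helper names row_matrix and col_matrix are ours, not standard) that builds $R_\pi$ and $C_\pi$ for the example permutation above, using 0-indexed permutations:

    import numpy as np

    def row_matrix(pi):
        """Row-based matrix R_pi: entry (i, j) is 1 exactly when j = pi(i).
        The permutation is 0-indexed, so pi[i] stands for pi(i + 1) - 1."""
        n = len(pi)
        R = np.zeros((n, n), dtype=int)
        R[np.arange(n), pi] = 1
        return R

    def col_matrix(pi):
        """Column-based matrix C_pi: entry (i, j) is 1 exactly when i = pi(j)."""
        return row_matrix(pi).T

    pi = [2, 1, 3, 0]          # the example pi: 1->3, 2->2, 3->4, 4->1
    R, C = row_matrix(pi), col_matrix(pi)
    assert (C == R.T).all()    # the two matrices are transposes of each other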

Permutation matrices permute rows or columns

Multiplying a matrix M by either $R_{\pi}$ or $C_{\pi}$ on either the left or the right will permute either the rows or columns of M by either $\pi$ or $\pi^{-1}$. The details are a bit tricky.

To begin with, when we permute the entries of a vector $(v_{1},\ldots,v_{n})$ by some permutation π, we move the $i^{\text{th}}$ entry $v_{i}$ of the input vector into the $\pi(i)^{\text{th}}$ slot of the output vector. Which entry then ends up in, say, the first slot of the output? Answer: The entry $v_{j}$ for which $\pi(j)=1$, and hence $j=\pi^{-1}(1)$. Arguing similarly about each of the slots, we find that the output vector is

$$\big(v_{\pi^{-1}(1)},\, v_{\pi^{-1}(2)},\, \ldots,\, v_{\pi^{-1}(n)}\big),$$

even though we are permuting by $\pi$, not by $\pi^{-1}$. Thus, in order to permute the entries by $\pi$, we must permute the indices by $\pi^{-1}$.[1]: 25  (Permuting the entries by $\pi$ is sometimes called taking the alibi viewpoint, while permuting the indices by $\pi$ would take the alias viewpoint.[4])
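
A short experiment makes the distinction concrete (plain Python; permute_entries is a hypothetical helper, and permutations are 0-indexed): moving each entry into slot $\pi(i)$ yields exactly the vector indexed by $\pi^{-1}$.

    def permute_entries(v, pi):
        """Alibi viewpoint: move entry v[i] into slot pi[i] of the output."""
        out = [None] * len(v)
        for i in range(len(v)):
            out[pi[i]] = v[i]
        return out

    pi = [2, 1, 3, 0]                              # the running example
    inv = [pi.index(k) for k in range(len(pi))]    # pi^{-1}
    v = ['a', 'b', 'c', 'd']
    # Permuting the entries by pi indexes the input by pi^{-1}:
    assert permute_entries(v, pi) == [v[j] for j in inv] == ['d', 'b', 'a', 'c']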

Now, suppose that we pre-multiply some n-row matrix $M=(m_{i,j})$ by the permutation matrix $C_{\pi}$. By the rule for matrix multiplication, the $(i,j)^{\text{th}}$ entry in the product $C_{\pi}M$ is

$$\sum_{k=1}^{n} c_{i,k}\, m_{k,j},$$

where $c_{i,k}$ is 0 except when $i=\pi(k)$, when it is 1. Thus, the only term in the sum that survives is the term in which $k=\pi^{-1}(i)$, and the sum reduces to $m_{\pi^{-1}(i),j}$. Since we have permuted the row index by $\pi^{-1}$, we have permuted the rows of M themselves by π.[1]: 25  A similar argument shows that post-multiplying an n-column matrix M by $R_{\pi}$ permutes its columns by π.

The other two options are pre-multiplying by $R_{\pi}$ or post-multiplying by $C_{\pi}$, and they permute the rows or columns respectively by $\pi^{-1}$, instead of by π.
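
All four options can be verified at once with NumPy fancy indexing (a minimal sketch under the same 0-indexed conventions as above; inv denotes $\pi^{-1}$):

    import numpy as np

    pi = [2, 1, 3, 0]
    inv = [pi.index(k) for k in range(4)]
    R = np.zeros((4, 4), dtype=int)
    R[np.arange(4), pi] = 1
    C = R.T
    M = np.arange(16).reshape(4, 4)      # any 4 x 4 test matrix

    assert (C @ M == M[inv, :]).all()    # rows permuted by pi
    assert (M @ R == M[:, inv]).all()    # columns permuted by pi
    assert (R @ M == M[pi, :]).all()     # rows permuted by pi^{-1}
    assert (M @ C == M[:, pi]).all()     # columns permuted by pi^{-1}

Note that "rows permuted by pi" means row i of M moves to row $\pi(i)$ of the product, so row i of the product is read from row $\pi^{-1}(i)$ of M, which is why the check indexes by inv.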

The transpose is also the inverse

A related argument proves that, as we claimed above, the transpose of any permutation matrix P also acts as its inverse, which implies that P is invertible. (Artin leaves that proof as an exercise,[1]: 26  which we here solve.) If $P=(p_{i,j})$, then the $(i,j)^{\text{th}}$ entry of its transpose $P^{\mathsf{T}}$ is $p_{j,i}$. The $(i,j)^{\text{th}}$ entry of the product $PP^{\mathsf{T}}$ is then

$$\sum_{k=1}^{n} p_{i,k}\, p_{j,k}.$$

Whenever $i\neq j$, the $k^{\text{th}}$ term in this sum is the product of two different entries in the $k^{\text{th}}$ column of P; so all terms are 0, and the sum is 0. When $i=j$, we are summing the squares of the entries in the $i^{\text{th}}$ row of P, so the sum is 1. The product $PP^{\mathsf{T}}$ is thus the identity matrix. A symmetric argument shows the same for $P^{\mathsf{T}}P$, implying that P is invertible with $P^{-1}=P^{\mathsf{T}}$.
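
This is easy to confirm numerically for any permutation matrix (a minimal NumPy sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.eye(5, dtype=int)[rng.permutation(5)]   # a random 5 x 5 permutation matrix
    I = np.eye(5, dtype=int)
    assert (P @ P.T == I).all() and (P.T @ P == I).all()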

Multiplying permutation matrices

Given two permutations σ and τ of n elements, the product of the corresponding column-based permutation matrices $C_{\sigma}$ and $C_{\tau}$ is given,[1]: 25  as you might expect, by

$$C_{\sigma}C_{\tau}=C_{\sigma\,\circ\,\tau},$$
where the composed permutation $\sigma\circ\tau$ applies first τ and then σ, working from right to left:
$$(\sigma\circ\tau)(k)=\sigma\big(\tau(k)\big).$$
This follows because pre-multiplying some matrix by $C_{\tau}$ and then pre-multiplying the resulting product by $C_{\sigma}$ gives the same result as pre-multiplying just once by the combined $C_{\sigma\,\circ\,\tau}$.

For the row-based matrices, there is a twist: the product of $R_{\sigma}$ and $R_{\tau}$ is given by

$$R_{\sigma}R_{\tau}=R_{\tau\,\circ\,\sigma},$$

with σ applied before τ in the composed permutation. This happens because we must post-multiply to avoid inversions under the row-based option, so we would post-multiply first by $R_{\sigma}$ and then by $R_{\tau}$.
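
Both rules are easy to confirm numerically. The sketch below (NumPy; the helpers R and C mirror the hypothetical row_matrix/col_matrix above) checks $C_\sigma C_\tau = C_{\sigma\circ\tau}$ and the twisted rule $R_\sigma R_\tau = R_{\tau\circ\sigma}$ for random σ and τ:

    import numpy as np

    def R(perm):
        n = len(perm)
        M = np.zeros((n, n), dtype=int)
        M[np.arange(n), perm] = 1
        return M

    def C(perm):
        return R(perm).T

    rng = np.random.default_rng(1)
    sigma, tau = list(rng.permutation(6)), list(rng.permutation(6))
    compose = lambda f, g: [f[g[k]] for k in range(len(f))]  # (f o g)(k) = f(g(k))

    assert (C(sigma) @ C(tau) == C(compose(sigma, tau))).all()   # C-rule
    assert (R(sigma) @ R(tau) == R(compose(tau, sigma))).all()   # R-rule, with the twist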

Some people, when applying a function to an argument, write the function after the argument (postfix notation), rather than before it. When doing linear algebra, they work with linear spaces of row vectors, and they apply a linear map to an argument by using the map's matrix to post-multiply the argument's row vector. They often use a left-to-right composition operator, which we here denote using a semicolon; so the composition $\sigma\,;\,\tau$ is defined either by

$$(\sigma\,;\,\tau)(k)=\tau\big(\sigma(k)\big),$$

or, more elegantly, by

$$(k)(\sigma\,;\,\tau)=\big((k)\sigma\big)\tau,$$

with 𝜎 applied first. That notation gives us a simpler rule for multiplying row-based permutation matrices:

$$R_{\sigma}R_{\tau}=R_{\sigma\,;\,\tau}.$$

Matrix group

When π is the identity permutation, which has $\pi(i)=i$ for all i, both $C_{\pi}$ and $R_{\pi}$ are the identity matrix.

There are n! permutation matrices, since there are n! permutations and the map $C\colon\pi\mapsto C_{\pi}$ is a one-to-one correspondence between permutations and permutation matrices. (The map $R$ is another such correspondence.) By the formulas above, those n × n permutation matrices form a group of order n! under matrix multiplication, with the identity matrix as its identity element, a group that we denote $\mathcal{P}_n$. The group $\mathcal{P}_n$ is a subgroup of the general linear group $GL_n(\mathbb{R})$ of invertible n × n matrices of real numbers. Indeed, for any field F, the group $\mathcal{P}_n$ is also a subgroup of the group $GL_n(F)$, where the matrix entries belong to F. (Every field contains 0 and 1 with $0+0=0$, $0+1=1$, $0\cdot0=0$, $0\cdot1=0$, and $1\cdot1=1$; and that's all we need to multiply permutation matrices. Different fields disagree about whether $1+1=0$, but that sum doesn't arise.)
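
For small n the whole group can be enumerated directly; a sketch checking closure and inverses for n = 3 (so 3! = 6 matrices), assuming NumPy:

    import itertools
    import numpy as np

    n = 3
    group = [np.eye(n, dtype=int)[list(p)]
             for p in itertools.permutations(range(n))]
    assert len(group) == 6                 # n! = 6 elements

    def member(M):
        return any((M == G).all() for G in group)

    # Closure and inverses, confirming the set is a group under multiplication:
    assert all(member(A @ B) for A in group for B in group)
    assert all(member(A.T) for A in group)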

Let $S_n^{\leftarrow}$ denote the symmetric group, or group of permutations, on {1, 2, ..., n}, where the group operation is the standard, right-to-left composition "$\circ$"; and let $S_n^{\rightarrow}$ denote the opposite group, which uses the left-to-right composition "$;$". The map $C\colon S_n^{\leftarrow}\to GL_n(\mathbb{R})$ that takes π to its column-based matrix $C_{\pi}$ is a faithful representation, and similarly for the map $R\colon S_n^{\rightarrow}\to GL_n(\mathbb{R})$ that takes π to $R_{\pi}$.

Doubly stochastic matrices

Every permutation matrix is doubly stochastic: its entries are non-negative, and every row and every column sums to 1. The set of all doubly stochastic matrices is called the Birkhoff polytope, and the permutation matrices play a special role in that polytope. The Birkhoff–von Neumann theorem says that every doubly stochastic real matrix is a convex combination of permutation matrices of the same order, with the permutation matrices being precisely the extreme points (the vertices) of the Birkhoff polytope. The Birkhoff polytope is thus the convex hull of the permutation matrices.[5]
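
The easy direction can be illustrated directly: any convex combination of permutation matrices is doubly stochastic (a minimal NumPy sketch; recovering such a combination from a given doubly stochastic matrix, which the theorem also guarantees, takes more work):

    import numpy as np

    rng = np.random.default_rng(2)
    n, t = 4, 5
    perms = [np.eye(n)[rng.permutation(n)] for _ in range(t)]
    w = rng.random(t)
    w /= w.sum()                            # convex coefficients: non-negative, sum 1

    D = sum(wi * P for wi, P in zip(w, perms))
    assert (D >= 0).all()
    assert np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1)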

Linear-algebraic properties

Just as each permutation is associated with two permutation matrices, each permutation matrix is associated with two permutations, as we can see by relabeling the example in the big square above, starting with the matrix P at the upper right:

$$\begin{matrix}
\rho_{P}\colon\begin{pmatrix}1&2&3&4\\3&2&4&1\end{pmatrix} & \longleftrightarrow & P\colon\begin{pmatrix}0&0&1&0\\0&1&0&0\\0&0&0&1\\1&0&0&0\end{pmatrix}\\[5pt]
\Big\updownarrow & & \Big\updownarrow\\[5pt]
P^{-1}\colon\begin{pmatrix}0&0&0&1\\0&1&0&0\\1&0&0&0\\0&0&1&0\end{pmatrix} & \longleftrightarrow & \kappa_{P}\colon\begin{pmatrix}1&2&3&4\\4&2&1&3\end{pmatrix}
\end{matrix}$$

So we are here denoting the inverse of C as $\kappa$ and the inverse of R as $\rho$. We can then compute the linear-algebraic properties of P from some combinatorial properties that are shared by the two permutations $\kappa_{P}$ and $\rho_{P}=\kappa_{P}^{-1}$.

A point is fixed by $\kappa_{P}$ just when it is fixed by $\rho_{P}$, and the trace of P is the number of such shared fixed points.[1]: 322  If the integer k is one of them, then the standard basis vector $e_k$ is an eigenvector of P.[1]: 118
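
A short check of the trace claim on the running example (NumPy sketch, 0-indexed, so the fixed point "2" of the article's 1-based example appears as index 1):

    import numpy as np

    perm = [2, 1, 3, 0]                     # fixes only index 1
    P = np.eye(4, dtype=int)[perm]
    fixed = [k for k in range(4) if perm[k] == k]
    assert np.trace(P) == len(fixed) == 1
    e = np.eye(4, dtype=int)[:, fixed[0]]   # the corresponding basis vector
    assert (P @ e == e).all()               # an eigenvector with eigenvalue 1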

To calculate the complex eigenvalues of P, write the permutation $\kappa_{P}$ as a composition of disjoint cycles, say $\kappa_{P}=c_{1}c_{2}\cdots c_{t}$. (Permutations of disjoint subsets commute, so it doesn't matter here whether we are composing right-to-left or left-to-right.) For $1\leq i\leq t$, let the length of the cycle $c_{i}$ be $\ell_{i}$, and let $L_{i}$ be the set of complex solutions of $x^{\ell_{i}}=1$, those solutions being the $\ell_{i}^{\,\text{th}}$ roots of unity. The multiset union of the $L_{i}$ is then the multiset of eigenvalues of P. Since writing $\rho_{P}$ as a product of cycles would give the same number of cycles of the same lengths, analyzing $\rho_{P}$ would give the same result. The multiplicity of any eigenvalue v is the number of i for which $L_{i}$ contains v.[6] (Since any permutation matrix is normal and any normal matrix is diagonalizable over the complex numbers,[1]: 259  the algebraic and geometric multiplicities of an eigenvalue v are the same.)
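
The cycle description can be verified directly. The sketch below (NumPy, with a hypothetical cycle_lengths helper) compares the roots of unity predicted by the cycle lengths with numerically computed eigenvalues:

    import numpy as np

    def cycle_lengths(perm):
        """Lengths of the disjoint cycles of a 0-indexed permutation."""
        seen, lengths = set(), []
        for start in range(len(perm)):
            if start in seen:
                continue
            k, n = start, 0
            while k not in seen:
                seen.add(k)
                k, n = perm[k], n + 1
            lengths.append(n)
        return lengths

    perm = [2, 1, 3, 0]                     # cycle type (3, 1)
    P = np.eye(len(perm))[perm]
    predicted = np.concatenate(
        [np.exp(2j * np.pi * np.arange(L) / L) for L in cycle_lengths(perm)])
    computed = np.linalg.eigvals(P)
    # Compare as multisets, sorting by (rounded) angle, then magnitude.
    key = lambda z: (round(float(np.angle(z)), 8), round(abs(z), 8))
    assert np.allclose(sorted(predicted, key=key), sorted(computed, key=key))

Here the cycle type (3, 1) predicts the three cube roots of unity together with an extra eigenvalue 1 from the fixed point.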

From group theory we know that any permutation may be written as a composition of transpositions. Therefore, any permutation matrix factors as a product of row-switching elementary matrices, each of which has determinant −1. Thus, the determinant of the permutation matrix P is the sign of the permutation $\kappa_{P}$, which is also the sign of $\rho_{P}$.
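
A numerical confirmation (NumPy sketch; the sign helper is ours, computing the sign from the cycle type, since a cycle of length $\ell$ factors into $\ell-1$ transpositions):

    import numpy as np

    def sign(perm):
        """Sign of a 0-indexed permutation: each cycle of length L
        contributes a factor (-1) ** (L - 1)."""
        seen, s = set(), 1
        for start in range(len(perm)):
            k, length = start, 0
            while k not in seen:
                seen.add(k)
                k, length = perm[k], length + 1
            if length:
                s *= (-1) ** (length - 1)
        return s

    perm = [2, 1, 3, 0]                    # one 3-cycle and one fixed point: even
    P = np.eye(len(perm))[perm]
    assert round(np.linalg.det(P)) == sign(perm) == 1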

Restricted forms

  • Costas array, a permutation matrix in which the displacement vectors between the entries are all distinct
  • n-queens puzzle, a permutation matrix in which each diagonal and antidiagonal contains at most one 1

References

  1. Artin, Michael (1991). Algebra. Prentice Hall. pp. 24–26, 118, 259, 322.
  2. Zavlanos, Michael M.; Pappas, George J. (November 2008). "A dynamical systems approach to weighted graph matching". Automatica. 44 (11): 2817–2824. CiteSeerX 10.1.1.128.6870. doi:10.1016/j.automatica.2008.04.009. "Let $O_n$ denote the set of $n\times n$ orthogonal matrices and $N_n$ denote the set of $n\times n$ element-wise non-negative matrices. Then $P_n=O_n\cap N_n$, where $P_n$ is the set of $n\times n$ permutation matrices."
  3. This terminology is not standard. Most authors use just one of the two correspondences, choosing which to be consistent with their other conventions. For example, Artin uses the column-based correspondence. We have here invented two names in order to discuss both options.
  4. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). The Symmetries of Things. A K Peters. p. 179. "A permutation---say, of the names of a number of people---can be thought of as moving either the names or the people. The alias viewpoint regards the permutation as assigning a new name or alias to each person (from the Latin alias = otherwise). Alternatively, from the alibi viewpoint we move the people to the places corresponding to their new names (from the Latin alibi = in another place)."
  5. Brualdi (2006), p. 19.
  6. Najnudel & Nikeghbali (2010), p. 4.
  • Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications. Vol. 108. Cambridge: Cambridge University Press. ISBN 0-521-86565-4. Zbl 1106.05001.
  • Najnudel, Joseph; Nikeghbali, Ashkan (2010), The Distribution of Eigenvalues of Randomized Permutation Matrices, arXiv:1005.0402, Bibcode:2010arXiv1005.0402N