Bose–Mesner algebra

In mathematics, a Bose–Mesner algebra is a special set of matrices which arise from a combinatorial structure known as an association scheme, together with the usual set of rules for combining (forming the products of) those matrices, such that they form an associative algebra, or, more precisely, a commutative associative algebra with identity. Among these rules are:

  • the result of a product is also within the set of matrices,
  • there is an identity matrix in the set, and
  • taking products is commutative.

Bose–Mesner algebras have applications in physics to spin models, and in statistics to the design of experiments. They are named for R. C. Bose and Dale Marsh Mesner.[1]

Definition

Let $X$ be a set of $v$ elements. Consider a partition of the 2-element subsets of $X$ into $n$ non-empty subsets $R_1, \ldots, R_n$ such that:

  • given an $x \in X$, the number of $y \in X$ such that $\{x,y\} \in R_i$ depends only on $i$ (and not on $x$). This number will be denoted by $v_i$, and
  • given $x, y \in X$ with $\{x,y\} \in R_k$, the number of $z \in X$ such that $\{x,z\} \in R_i$ and $\{z,y\} \in R_j$ depends only on $i$, $j$ and $k$ (and not on $x$ and $y$). This number will be denoted by $p_{ij}^{k}$.

This structure is enhanced by adding all pairs of repeated elements of $X$ and collecting them in a subset $R_0$. This enhancement permits the parameters $i$, $j$, and $k$ to take the value zero, and allows some of $x$, $y$ or $z$ to be equal.

A set with such an enhanced partition is called an association scheme.[2] One may view an association scheme as a partition of the edges of a complete graph (with vertex set X) into n classes, often thought of as color classes. In this representation, there is a loop at each vertex and all the loops receive the same 0th color.
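
As an illustration (not part of the original definition), the two counting conditions can be checked directly on a small example. The sketch below, in Python, uses the binary Hamming scheme of length 2: $X = \{0,1\}^2$, with pairs classified by Hamming distance 0, 1 or 2.

```python
# Illustrative check of the association-scheme axioms for the binary
# Hamming scheme H(2,2): X = {0,1}^2, pairs classified by Hamming distance.
from itertools import product

X = list(product([0, 1], repeat=2))          # the 4 points
dist = lambda x, y: sum(a != b for a, b in zip(x, y))
n = 2                                        # number of nonzero classes

# v_i: for each i, the number of y with dist(x, y) = i must not depend on x.
for i in range(n + 1):
    counts = {x: sum(dist(x, y) == i for y in X) for x in X}
    assert len(set(counts.values())) == 1
    print(f"v_{i} =", counts[X[0]])

# p_ij^k: for each pair (x, y) at distance k, the number of z with
# dist(x, z) = i and dist(z, y) = j must depend only on (i, j, k).
for i in range(n + 1):
    for j in range(n + 1):
        for k in range(n + 1):
            vals = {sum(dist(x, z) == i and dist(z, y) == j for z in X)
                    for x in X for y in X if dist(x, y) == k}
            assert len(vals) == 1
```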

The association scheme can also be represented algebraically. Consider the matrices $D_i$ defined by:

$$(D_i)_{x,y} = \begin{cases} 1, & \text{if } (x,y) \in R_i, \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$

Let $\mathcal{A}$ be the vector space consisting of all matrices $\sum_{i=0}^{n} a_i D_i$, with $a_i$ complex.[3][4]

The definition of an association scheme is equivalent to saying that the $D_i$ are $v \times v$ (0,1)-matrices which satisfy

  1. $D_i$ is symmetric,
  2. $\sum_{i=0}^{n} D_i = J$ (the all-ones matrix),
  3. $D_0 = I$,
  4. $D_i D_j = \sum_{k=0}^{n} p_{ij}^{k} D_k = D_j D_i, \qquad i,j = 0, \ldots, n.$

The $(x,y)$-th entry of the left side of 4. is the number of paths of length two joining $x$ and $y$ whose two edges have colors $i$ and $j$. Note that each row and each column of $D_i$ contains $v_i$ 1s:

$$D_i J = J D_i = v_i J. \qquad (2)$$
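
A short numerical check of conditions 1.–4. and of equation (2), again for the illustrative Hamming scheme $H(2,2)$ and assuming NumPy is available, might look as follows; the structure constants $p_{ij}^{k}$ are read off directly from the product matrices.

```python
import numpy as np
from itertools import product

X = list(product([0, 1], repeat=2))
v, n = len(X), 2
dist = lambda x, y: sum(a != b for a, b in zip(x, y))

# Adjacency matrices D_i of equation (1): (D_i)_{x,y} = 1 iff dist(x, y) = i.
D = [np.array([[1 if dist(x, y) == i else 0 for y in X] for x in X])
     for i in range(n + 1)]

J = np.ones((v, v), dtype=int)
assert all((Di == Di.T).all() for Di in D)               # condition 1: symmetric
assert (sum(D) == J).all()                               # condition 2: sum is all-ones
assert (D[0] == np.eye(v, dtype=int)).all()              # condition 3: D_0 = I
for i in range(n + 1):                                   # condition 4: closure, commutativity
    for j in range(n + 1):
        prod_ij = D[i] @ D[j]
        assert (prod_ij == D[j] @ D[i]).all()
        # read off p_ij^k as the (constant) value of D_i D_j on the support of D_k
        p = [prod_ij[np.nonzero(D[k])][0] for k in range(n + 1)]
        assert (prod_ij == sum(p[k] * D[k] for k in range(n + 1))).all()
for Di in D:                                             # equation (2): D_i J = J D_i = v_i J
    vi = Di.sum(axis=1)[0]
    assert (Di @ J == vi * J).all() and (J @ Di == vi * J).all()
```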

From 1., these matrices are symmetric. From 2., $D_0, \ldots, D_n$ are linearly independent, and the dimension of $\mathcal{A}$ is $n+1$. From 4., $\mathcal{A}$ is closed under multiplication, and multiplication is always associative. This associative commutative algebra $\mathcal{A}$ is called the Bose–Mesner algebra of the association scheme. Since the matrices in $\mathcal{A}$ are symmetric and commute with each other, they can be simultaneously diagonalized: there is a matrix $S$ such that to each $A \in \mathcal{A}$ there is a diagonal matrix $\Lambda_A$ with $S^{-1} A S = \Lambda_A$. This means that $\mathcal{A}$ is semi-simple and has a unique basis of primitive idempotents $E_0, \ldots, E_n$. These are complex $v \times v$ matrices satisfying

$$E_i^2 = E_i, \quad i = 0, \ldots, n, \qquad (3)$$
$$E_i E_k = 0, \quad i \neq k, \qquad (4)$$
$$\sum_{i=0}^{n} E_i = I. \qquad (5)$$
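
The simultaneous diagonalization and the idempotent relations (3)–(5) can also be verified numerically. The sketch below is an illustration for the $H(2,2)$ example only, not a general algorithm: it builds the projectors onto the common eigenspaces from an eigendecomposition of $D_1$, which in this small case already separates the three eigenspaces of the algebra.

```python
import numpy as np
from itertools import product

X = list(product([0, 1], repeat=2))
v, n = len(X), 2
dist = lambda x, y: sum(a != b for a, b in zip(x, y))
D = [np.array([[float(dist(x, y) == i) for y in X] for x in X]) for i in range(n + 1)]

# D_1 is symmetric; for this small example its eigenspaces coincide with
# the common eigenspaces of the whole algebra.
eigvals, S = np.linalg.eigh(D[1])
E = []
for lam in sorted(set(np.round(eigvals, 8)), reverse=True):
    cols = S[:, np.isclose(eigvals, lam)]
    E.append(cols @ cols.T)            # orthogonal projector onto the eigenspace

# Equations (3)-(5): idempotent, mutually annihilating, summing to the identity.
for i, Ei in enumerate(E):
    assert np.allclose(Ei @ Ei, Ei)                        # (3)
    for k, Ek in enumerate(E):
        if i != k:
            assert np.allclose(Ei @ Ek, np.zeros((v, v)))  # (4)
assert np.allclose(sum(E), np.eye(v))                      # (5)
```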

The Bose–Mesner algebra has two distinguished bases: the basis consisting of the adjacency matrices $D_i$, and the basis consisting of the irreducible idempotent matrices $E_k$. By definition, there exist uniquely determined complex numbers $p_i(k)$ and $q_k(i)$ such that

$$D_i = \sum_{k=0}^{n} p_i(k)\, E_k, \qquad (6)$$

and

$$|X|\, E_k = \sum_{i=0}^{n} q_k(i)\, D_i. \qquad (7)$$

The p-numbers $p_i(k)$ and the q-numbers $q_k(i)$ play a prominent role in the theory.[5] They satisfy well-defined orthogonality relations. The p-number $p_i(k)$ is the eigenvalue of the adjacency matrix $D_i$ on the column space of $E_k$.
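
To illustrate, the p-numbers and q-numbers of the small Hamming example can be extracted directly from the $D_i$ and $E_k$: $p_i(k)$ as the scalar by which $D_i$ acts on the $k$-th eigenspace, and $q_k(i)$ from the entries of $|X| E_k$, after which the expansions (6) and (7) can be checked. This is only an illustrative computation under the assumptions of the earlier sketches, not the method of the cited references.

```python
import numpy as np
from itertools import product

X = list(product([0, 1], repeat=2))
v, n = len(X), 2
dist = lambda x, y: sum(a != b for a, b in zip(x, y))
D = [np.array([[float(dist(x, y) == i) for y in X] for x in X]) for i in range(n + 1)]

# Primitive idempotents, ordered so that E_0 projects onto the all-ones vector.
eigvals, S = np.linalg.eigh(D[1])
E = []
for lam in sorted(set(np.round(eigvals, 8)), reverse=True):
    cols = S[:, np.isclose(eigvals, lam)]
    E.append(cols @ cols.T)

# p_i(k): the scalar by which D_i acts on the k-th eigenspace.
P = np.array([[np.trace(D[i] @ E[k]) / np.trace(E[k]) for i in range(n + 1)]
              for k in range(n + 1)])        # P[k, i] = p_i(k)
# q_k(i): |X| times the (constant) entry of E_k on pairs in R_i, by equation (7).
Q = np.array([[v * E[k][np.nonzero(D[i])][0] for k in range(n + 1)]
              for i in range(n + 1)])        # Q[i, k] = q_k(i)

for i in range(n + 1):                       # equation (6)
    assert np.allclose(D[i], sum(P[k, i] * E[k] for k in range(n + 1)))
for k in range(n + 1):                       # equation (7)
    assert np.allclose(v * E[k], sum(Q[i, k] * D[i] for i in range(n + 1)))
print("P =\n", np.round(P, 6))
print("Q =\n", np.round(Q, 6))
```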

Theorem

The eigenvalues $p_i(k)$ and $q_k(i)$ satisfy the following orthogonality conditions, where $\mu_k = \operatorname{rank} E_k$ denotes the multiplicity of the $k$-th common eigenspace:

$$\sum_{k=0}^{n} \mu_k\, p_i(k)\, p_\ell(k) = v\, v_i\, \delta_{i\ell}, \qquad (8)$$
$$\sum_{i=0}^{n} v_i\, q_k(i)\, q_\ell(i) = v\, \mu_k\, \delta_{k\ell}. \qquad (9)$$

Also

$$\mu_j\, p_i(j) = v_i\, q_j(i), \quad i,j = 0, \ldots, n. \qquad (10)$$

In matrix notation, with $P$ and $Q$ the $(n+1) \times (n+1)$ eigenmatrices whose entries are $P_{ki} = p_i(k)$ and $Q_{ik} = q_k(i)$, these are

$$P^{T} \Delta_{\mu} P = v\, \Delta_{v}, \qquad (11)$$
$$Q^{T} \Delta_{v} Q = v\, \Delta_{\mu}, \qquad (12)$$

where $\Delta_{v} = \operatorname{diag}\{v_0, v_1, \ldots, v_n\}$ and $\Delta_{\mu} = \operatorname{diag}\{\mu_0, \mu_1, \ldots, \mu_n\}$.
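
For the $H(2,2)$ example the two eigenmatrices happen to coincide ($P = Q$), and relations (8)–(12), together with (14) below, reduce to a few matrix identities. The values of $P$, $v_i$ and $\mu_k$ in the sketch below are those computed for that example.

```python
import numpy as np

# Eigenmatrices of the H(2,2) example (self-dual: P = Q).
P = np.array([[1, 2, 1],
              [1, 0, -1],
              [1, -2, 1]], dtype=float)      # P[k, i] = p_i(k)
Q = P.copy()                                 # Q[i, k] = q_k(i)
v = 4
valencies = np.array([1.0, 2.0, 1.0])        # v_i
mults = np.array([1.0, 2.0, 1.0])            # mu_k = rank E_k

Dv, Dmu = np.diag(valencies), np.diag(mults)
assert np.allclose(P.T @ Dmu @ P, v * Dv)    # equation (11), i.e. (8)
assert np.allclose(Q.T @ Dv @ Q, v * Dmu)    # equation (12), i.e. (9)
assert np.allclose(Dmu @ P, (Dv @ Q).T)      # equation (10)
assert np.allclose(Q, v * np.linalg.inv(P))  # equation (14)
```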

Proof of theorem

The eigenvalues of $D_i D_\ell$ are $p_i(k)\, p_\ell(k)$, with multiplicities $\mu_k$. This implies that

$$v\, v_i\, \delta_{i\ell} = \operatorname{trace} D_i D_\ell = \sum_{k=0}^{n} \mu_k\, p_i(k)\, p_\ell(k), \qquad (13)$$

which proves Equation (8) and Equation (11). Substituting (7) into (6) shows that $PQ = QP = vI$, so that together with (11),

$$Q = v P^{-1} = \Delta_v^{-1} P^{T} \Delta_{\mu}, \qquad (14)$$

which gives Equations (9), (10) and (12). $\Box$

There is an analogy between extensions of association schemes and extensions of finite fields. The cases of most interest are those where the extended scheme is defined on the $n$-th Cartesian power $X = \mathcal{F}^n$ of a set $\mathcal{F}$ on which a basic association scheme $(\mathcal{F}, K)$ is defined. A first association scheme defined on $X = \mathcal{F}^n$ is called the $n$-th Kronecker power $(\mathcal{F}, K)_{\otimes}^{n}$ of $(\mathcal{F}, K)$. The extension is then defined on the same set $X = \mathcal{F}^n$ by gathering classes of $(\mathcal{F}, K)_{\otimes}^{n}$. The Kronecker power corresponds to the polynomial ring $\mathbb{F}[X]$ over a field $\mathbb{F}$, while the extension scheme corresponds to the extension field obtained as a quotient. An example of such an extended scheme is the Hamming scheme.
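
As a sketch of this construction (using NumPy, and taking the basic scheme on $\mathcal{F}$ to be the 2-class scheme "equal / different" on a $q$-element set), the adjacency matrices of the Hamming scheme $H(n,q)$ can be obtained by forming the Kronecker-power classes and gathering them by weight; the helper name hamming_scheme is only illustrative.

```python
import numpy as np
from itertools import product

def hamming_scheme(n, q):
    """Adjacency matrices of the Hamming scheme H(n, q), built by gathering
    the classes of the n-th Kronecker power of the 2-class scheme on a
    q-element set (classes: 'equal' and 'different')."""
    D0, D1 = np.eye(q), np.ones((q, q)) - np.eye(q)
    A = [np.zeros((q ** n, q ** n)) for _ in range(n + 1)]
    for t in product([0, 1], repeat=n):      # one class of the Kronecker power
        M = np.ones((1, 1))
        for tj in t:                         # Kronecker product, factor by factor
            M = np.kron(M, D1 if tj else D0)
        A[sum(t)] += M                       # gather classes by weight = Hamming distance
    return A

# Sanity check for H(3,2): A_w[x,y] = 1 exactly when x and y differ in w coordinates.
A = hamming_scheme(3, 2)
pts = list(product(range(2), repeat=3))
for w, Aw in enumerate(A):
    for a, x in enumerate(pts):
        for b, y in enumerate(pts):
            assert Aw[a, b] == (sum(u != t for u, t in zip(x, y)) == w)
```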

Association schemes may be merged, but merging them leads to non-symmetric association schemes, whereas all usual codes are subgroups in symmetric Abelian schemes.[6][7][8]

References

  • Bailey, Rosemary A. (2004), Association schemes: Designed experiments, algebra and combinatorics, Cambridge Studies in Advanced Mathematics, vol. 84, Cambridge University Press, p. 387, ISBN 978-0-521-82446-0, MR 2047311
  • Bannai, Eiichi; Ito, Tatsuro (1984), Algebraic combinatorics I: Association schemes, Menlo Park, CA: The Benjamin/Cummings Publishing Co., Inc., pp. xxiv+425, ISBN 0-8053-0490-8, MR 0882540
  • Bannai, Etsuko (2001), "Bose–Mesner algebras associated with four-weight spin models", Graphs and Combinatorics, 17 (4): 589–598, doi:10.1007/PL00007251, S2CID 41255028
  • Bose, R. C.; Mesner, D. M. (1959), "On linear associative algebras corresponding to association schemes of partially balanced designs", Annals of Mathematical Statistics, 30 (1): 21–38, doi:10.1214/aoms/1177706356, JSTOR 2237117, MR 0102157
  • Cameron, P. J.; van Lint, J. H. (1991), Designs, Graphs, Codes and their Links, Cambridge: Cambridge University Press, ISBN 0-521-42385-6
  • Camion, P. (1998), "Codes and association schemes: Basic properties of association schemes relevant to coding", in Pless, V. S.; Huffman, W. C. (eds.), Handbook of coding theory, The Netherlands: Elsevier
  • Delsarte, P.; Levenshtein, V. I. (1998), "Association schemes and coding theory", IEEE Transactions on Information Theory, 44 (6): 2477–2504, doi:10.1109/18.720545
  • MacWilliams, F. J.; Sloane, N. J. A. (1978), The theory of error-correcting codes, New York: Elsevier
  • Nomura, K. (1997), "An algebra associated with a spin model", Journal of Algebraic Combinatorics, 6 (1): 53–58, doi:10.1023/A:1008644201287