Complex random vector

In probability theory and statistics, a complex random vector is typically a tuple of complex-valued random variables, and generally is a random variable taking values in a vector space over the field of complex numbers. If $Z_1, \ldots, Z_n$ are complex-valued random variables, then the $n$-tuple $(Z_1, \ldots, Z_n)$ is a complex random vector. Complex random vectors can always be considered as pairs of real random vectors: their real and imaginary parts.

Some concepts of real random vectors have a straightforward generalization to complex random vectors, for example the definition of the mean. Other concepts, such as the pseudo-covariance matrix, circular symmetry and properness, are unique to complex random vectors.

Applications of complex random vectors are found in digital signal processing.


Definition

A complex random vector $\mathbf{Z} = (Z_1, \ldots, Z_n)^T$ on the probability space $(\Omega, \mathcal{F}, P)$ is a function $\mathbf{Z} \colon \Omega \to \mathbb{C}^n$ such that the vector $(\Re(Z_1), \Im(Z_1), \ldots, \Re(Z_n), \Im(Z_n))^T$ is a real random vector on $(\Omega, \mathcal{F}, P)$, where $\Re(z)$ denotes the real part of $z$ and $\Im(z)$ denotes the imaginary part of $z$.[1]: p. 292
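
The identification with a real random vector of twice the length is easy to make concrete. A minimal sketch in Python with NumPy (the Gaussian distribution and the sample size are illustrative assumptions, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 100_000          # vector dimension and number of samples (illustrative)

# The defining identification: a complex random vector is the real random
# vector (Re Z_1, Im Z_1, ..., Re Z_n, Im Z_n) in disguise.  Any distribution
# on R^{2n} would do; standard Gaussians are only an example.
X = rng.standard_normal((N, n))    # realizations of the real parts
Y = rng.standard_normal((N, n))    # realizations of the imaginary parts

Z = X + 1j * Y                     # samples of Z, one row per realization
print(Z.shape, Z.dtype)            # (100000, 3) complex128
```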

Cumulative distribution function

The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form $P(Z \leq 1+3i)$ make no sense. However, expressions of the form $P(\Re(Z) \leq 1, \Im(Z) \leq 3)$ make sense. Therefore, the cumulative distribution function $F_{\mathbf{Z}} \colon \mathbb{C}^n \to [0,1]$ of a random vector $\mathbf{Z} = (Z_1, \ldots, Z_n)^T$ is defined as

$$F_{\mathbf{Z}}(\mathbf{z}) = \operatorname{P}(\Re(Z_1) \leq \Re(z_1), \Im(Z_1) \leq \Im(z_1), \ldots, \Re(Z_n) \leq \Re(z_n), \Im(Z_n) \leq \Im(z_n))$$

(Eq.1)

where $\mathbf{z} = (z_1, \ldots, z_n)^T$.
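
Since Eq.1 involves only componentwise comparisons of real and imaginary parts, it can be estimated by counting over samples. A self-contained sketch (the distribution and the evaluation point are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200_000, 2
Z = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))

def empirical_cdf(Z, z):
    """Estimate F_Z(z) = P(Re Z_k <= Re z_k and Im Z_k <= Im z_k for all k)."""
    ok = (Z.real <= z.real) & (Z.imag <= z.imag)   # componentwise comparisons
    return ok.all(axis=1).mean()                   # fraction of samples below z

z = np.array([1 + 3j, 0.5 - 0.2j])                 # first component echoes the 1+3i example
print(empirical_cdf(Z, z))
```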

Expectation

As in the real case, the expectation (also called expected value) of a complex random vector is taken component-wise.[1]: p. 293

$$\operatorname{E}[\mathbf{Z}] = (\operatorname{E}[Z_1], \ldots, \operatorname{E}[Z_n])^T$$

(Eq.2)
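
A quick sketch of the componentwise mean, which also shows that it splits into the means of the real and imaginary parts (the shifted Gaussian data are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000
# Shift the real and imaginary parts so the true mean is 2 - 1j in each component.
Z = (rng.standard_normal((N, 3)) + 2.0) + 1j * (rng.standard_normal((N, 3)) - 1.0)

mean_direct = Z.mean(axis=0)                                 # componentwise complex mean (Eq.2)
mean_split = Z.real.mean(axis=0) + 1j * Z.imag.mean(axis=0)  # E[Z] = E[Re Z] + i E[Im Z]
print(np.allclose(mean_direct, mean_split))                  # True; both are close to 2 - 1j
```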

Covariance matrix and pseudo-covariance matrix

The covariance matrix (also called second central moment) $\operatorname{K}_{\mathbf{Z}\mathbf{Z}}$ contains the covariances between all pairs of components. The covariance matrix of an $n \times 1$ random vector is an $n \times n$ matrix whose $(i,j)$-th element is the covariance between the $i$-th and the $j$-th random variables.[2]: p. 372 Unlike in the case of real random variables, the covariance between two complex random variables involves the complex conjugate of one of the two. Thus the covariance matrix is a Hermitian matrix.[1]: p. 293

$$\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{cov}[\mathbf{Z}, \mathbf{Z}] = \operatorname{E}[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])^H] = \operatorname{E}[\mathbf{Z}\mathbf{Z}^H] - \operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{Z}^H]$$

(Eq.3)
$$\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \begin{bmatrix}
\operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_1 - \operatorname{E}[Z_1])}] & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_2 - \operatorname{E}[Z_2])}] & \cdots & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_n - \operatorname{E}[Z_n])}] \\
\operatorname{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(Z_1 - \operatorname{E}[Z_1])}] & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(Z_2 - \operatorname{E}[Z_2])}] & \cdots & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(Z_n - \operatorname{E}[Z_n])}] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{E}[(Z_n - \operatorname{E}[Z_n])\overline{(Z_1 - \operatorname{E}[Z_1])}] & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])\overline{(Z_2 - \operatorname{E}[Z_2])}] & \cdots & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])\overline{(Z_n - \operatorname{E}[Z_n])}]
\end{bmatrix}$$

The pseudo-covariance matrix (also called relation matrix) is defined by replacing Hermitian transposition with transposition in the definition above.

$$\operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \operatorname{cov}[\mathbf{Z}, \overline{\mathbf{Z}}] = \operatorname{E}[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])^T] = \operatorname{E}[\mathbf{Z}\mathbf{Z}^T] - \operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{Z}^T]$$

(Eq.4)
$$\operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \begin{bmatrix}
\operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(Z_1 - \operatorname{E}[Z_1])] & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(Z_2 - \operatorname{E}[Z_2])] & \cdots & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(Z_n - \operatorname{E}[Z_n])] \\
\operatorname{E}[(Z_2 - \operatorname{E}[Z_2])(Z_1 - \operatorname{E}[Z_1])] & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])(Z_2 - \operatorname{E}[Z_2])] & \cdots & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])(Z_n - \operatorname{E}[Z_n])] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{E}[(Z_n - \operatorname{E}[Z_n])(Z_1 - \operatorname{E}[Z_1])] & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])(Z_2 - \operatorname{E}[Z_2])] & \cdots & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])(Z_n - \operatorname{E}[Z_n])]
\end{bmatrix}$$
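
A sketch of the corresponding sample estimators, with samples in the rows of an array (plain $1/N$ averaging; the Gaussian data are only a placeholder):

```python
import numpy as np

def cov_matrices(Z):
    """Sample versions of Eq.3 and Eq.4 for samples in the rows of Z (shape N x n)."""
    Zc = Z - Z.mean(axis=0)          # subtract the componentwise mean (Eq.2)
    N = Z.shape[0]
    K = Zc.T @ Zc.conj() / N         # covariance: conjugate on the second factor
    J = Zc.T @ Zc / N                # pseudo-covariance: plain transpose, no conjugate
    return K, J

rng = np.random.default_rng(3)
Z = rng.standard_normal((100_000, 3)) + 1j * rng.standard_normal((100_000, 3))
K, J = cov_matrices(Z)
print(np.allclose(K, K.conj().T))    # True: K is Hermitian
print(np.allclose(J, J.T))           # True: J is symmetric
```
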
Properties

The covariance matrix is a Hermitian matrix, i.e.[1]: p. 293

$$\operatorname{K}_{\mathbf{Z}\mathbf{Z}}^H = \operatorname{K}_{\mathbf{Z}\mathbf{Z}}.$$

The pseudo-covariance matrix is a symmetric matrix, i.e.

$$\operatorname{J}_{\mathbf{Z}\mathbf{Z}}^T = \operatorname{J}_{\mathbf{Z}\mathbf{Z}}.$$

The covariance matrix is a positive semidefinite matrix, i.e.

$$\mathbf{a}^H \operatorname{K}_{\mathbf{Z}\mathbf{Z}} \mathbf{a} \geq 0 \quad \text{for all } \mathbf{a} \in \mathbb{C}^n.$$
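
These properties are easy to confirm numerically from any sample covariance matrix; a sketch using `eigvalsh`, which relies on the matrix being Hermitian (guaranteed by the first property):

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.standard_normal((50_000, 3)) + 1j * rng.standard_normal((50_000, 3))
Zc = Z - Z.mean(axis=0)
K = Zc.T @ Zc.conj() / len(Zc)       # sample covariance matrix (Eq.3)

# Hermitian, so eigvalsh applies and all eigenvalues are real.
eigs = np.linalg.eigvalsh(K)
print(np.allclose(K, K.conj().T))    # True
print((eigs >= -1e-12).all())        # True: positive semidefinite (up to roundoff)

# a^H K a >= 0 for an arbitrary vector a:
a = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print((a.conj() @ K @ a).real >= 0)  # True (the imaginary part is ~0 by Hermitian symmetry)
```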

Covariance matrices of real and imaginary parts

By decomposing the random vector $\mathbf{Z}$ into its real part $\mathbf{X} = \Re(\mathbf{Z})$ and imaginary part $\mathbf{Y} = \Im(\mathbf{Z})$ (i.e. $\mathbf{Z} = \mathbf{X} + i\mathbf{Y}$), the pair $(\mathbf{X}, \mathbf{Y})$ has a covariance matrix of the form:

$$\begin{bmatrix} \operatorname{K}_{\mathbf{X}\mathbf{X}} & \operatorname{K}_{\mathbf{X}\mathbf{Y}} \\ \operatorname{K}_{\mathbf{Y}\mathbf{X}} & \operatorname{K}_{\mathbf{Y}\mathbf{Y}} \end{bmatrix}$$

The matrices $\operatorname{K}_{\mathbf{Z}\mathbf{Z}}$ and $\operatorname{J}_{\mathbf{Z}\mathbf{Z}}$ can be related to the covariance matrices of $\mathbf{X}$ and $\mathbf{Y}$ via the following expressions:

$$\begin{aligned}
\operatorname{K}_{\mathbf{X}\mathbf{X}} &= \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^T] = \tfrac{1}{2}\operatorname{Re}(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} + \operatorname{J}_{\mathbf{Z}\mathbf{Z}}) \\
\operatorname{K}_{\mathbf{Y}\mathbf{Y}} &= \operatorname{E}[(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^T] = \tfrac{1}{2}\operatorname{Re}(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} - \operatorname{J}_{\mathbf{Z}\mathbf{Z}}) \\
\operatorname{K}_{\mathbf{Y}\mathbf{X}} &= \operatorname{E}[(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^T] = \tfrac{1}{2}\operatorname{Im}(\operatorname{J}_{\mathbf{Z}\mathbf{Z}} + \operatorname{K}_{\mathbf{Z}\mathbf{Z}}) \\
\operatorname{K}_{\mathbf{X}\mathbf{Y}} &= \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^T] = \tfrac{1}{2}\operatorname{Im}(\operatorname{J}_{\mathbf{Z}\mathbf{Z}} - \operatorname{K}_{\mathbf{Z}\mathbf{Z}})
\end{aligned}$$

Conversely:

$$\begin{aligned}
\operatorname{K}_{\mathbf{Z}\mathbf{Z}} &= \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{K}_{\mathbf{Y}\mathbf{Y}} + i(\operatorname{K}_{\mathbf{Y}\mathbf{X}} - \operatorname{K}_{\mathbf{X}\mathbf{Y}}) \\
\operatorname{J}_{\mathbf{Z}\mathbf{Z}} &= \operatorname{K}_{\mathbf{X}\mathbf{X}} - \operatorname{K}_{\mathbf{Y}\mathbf{Y}} + i(\operatorname{K}_{\mathbf{Y}\mathbf{X}} + \operatorname{K}_{\mathbf{X}\mathbf{Y}})
\end{aligned}$$
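
Both conversion formulas hold exactly at the level of sample moments as well, so they are easy to verify; a sketch with deliberately correlated real and imaginary parts (the mixing matrix is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200_000
X = rng.standard_normal((N, 2))                                            # real part
Y = X @ np.array([[0.5, 0.2], [0.0, 1.0]]) + rng.standard_normal((N, 2))   # correlated imaginary part
Z = X + 1j * Y

def scov(A, B):
    """Sample cross-covariance E[(A - EA)(B - EB)^T] for real samples in rows."""
    Ac, Bc = A - A.mean(axis=0), B - B.mean(axis=0)
    return Ac.T @ Bc / len(Ac)

Zc = Z - Z.mean(axis=0)
K = Zc.T @ Zc.conj() / N
J = Zc.T @ Zc / N

# Check K = K_XX + K_YY + i(K_YX - K_XY) and J = K_XX - K_YY + i(K_YX + K_XY).
KXX, KYY, KXY, KYX = scov(X, X), scov(Y, Y), scov(X, Y), scov(Y, X)
print(np.allclose(K, KXX + KYY + 1j * (KYX - KXY)))  # True
print(np.allclose(J, KXX - KYY + 1j * (KYX + KXY)))  # True
```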

Cross-covariance matrix and pseudo-cross-covariance matrix

The cross-covariance matrix between two complex random vectors $\mathbf{Z}, \mathbf{W}$ is defined as:

$$\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}[\mathbf{Z}, \mathbf{W}] = \operatorname{E}[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{W} - \operatorname{E}[\mathbf{W}])^H] = \operatorname{E}[\mathbf{Z}\mathbf{W}^H] - \operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{W}^H]$$

(Eq.5)
$$\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \begin{bmatrix}
\operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(W_1 - \operatorname{E}[W_1])}] & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(W_2 - \operatorname{E}[W_2])}] & \cdots & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(W_n - \operatorname{E}[W_n])}] \\
\operatorname{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(W_1 - \operatorname{E}[W_1])}] & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(W_2 - \operatorname{E}[W_2])}] & \cdots & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(W_n - \operatorname{E}[W_n])}] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{E}[(Z_n - \operatorname{E}[Z_n])\overline{(W_1 - \operatorname{E}[W_1])}] & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])\overline{(W_2 - \operatorname{E}[W_2])}] & \cdots & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])\overline{(W_n - \operatorname{E}[W_n])}]
\end{bmatrix}$$

The pseudo-cross-covariance matrix is defined as:

$$\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}[\mathbf{Z}, \overline{\mathbf{W}}] = \operatorname{E}[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{W} - \operatorname{E}[\mathbf{W}])^T] = \operatorname{E}[\mathbf{Z}\mathbf{W}^T] - \operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{W}^T]$$

(Eq.6)
$$\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \begin{bmatrix}
\operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(W_1 - \operatorname{E}[W_1])] & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(W_2 - \operatorname{E}[W_2])] & \cdots & \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(W_n - \operatorname{E}[W_n])] \\
\operatorname{E}[(Z_2 - \operatorname{E}[Z_2])(W_1 - \operatorname{E}[W_1])] & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])(W_2 - \operatorname{E}[W_2])] & \cdots & \operatorname{E}[(Z_2 - \operatorname{E}[Z_2])(W_n - \operatorname{E}[W_n])] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{E}[(Z_n - \operatorname{E}[Z_n])(W_1 - \operatorname{E}[W_1])] & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])(W_2 - \operatorname{E}[W_2])] & \cdots & \operatorname{E}[(Z_n - \operatorname{E}[Z_n])(W_n - \operatorname{E}[W_n])]
\end{bmatrix}$$

Two complex random vectors $\mathbf{Z}$ and $\mathbf{W}$ are called uncorrelated if

$$\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{J}_{\mathbf{Z}\mathbf{W}} = 0.$$
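
Independent draws give an immediate example of uncorrelated vectors: both Eq.5 and Eq.6 vanish up to sampling noise. A sketch (the 0.05 threshold is a loose Monte Carlo tolerance):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000
Z = rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))
W = rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))  # drawn independently of Z

Zc, Wc = Z - Z.mean(axis=0), W - W.mean(axis=0)
K_ZW = Zc.T @ Wc.conj() / N   # sample version of Eq.5
J_ZW = Zc.T @ Wc / N          # sample version of Eq.6

# Both matrices are ~0, so Z and W are (empirically) uncorrelated.
print(np.abs(K_ZW).max() < 0.05, np.abs(J_ZW).max() < 0.05)
```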

Independence

Two complex random vectors $\mathbf{Z} = (Z_1, \ldots, Z_m)^T$ and $\mathbf{W} = (W_1, \ldots, W_n)^T$ are called independent if

$$F_{\mathbf{Z},\mathbf{W}}(\mathbf{z}, \mathbf{w}) = F_{\mathbf{Z}}(\mathbf{z}) \cdot F_{\mathbf{W}}(\mathbf{w}) \quad \text{for all } \mathbf{z}, \mathbf{w}$$

(Eq.7)

where $F_{\mathbf{Z}}(\mathbf{z})$ and $F_{\mathbf{W}}(\mathbf{w})$ denote the cumulative distribution functions of $\mathbf{Z}$ and $\mathbf{W}$ as defined in Eq.1 and $F_{\mathbf{Z},\mathbf{W}}(\mathbf{z}, \mathbf{w})$ denotes their joint cumulative distribution function. Independence of $\mathbf{Z}$ and $\mathbf{W}$ is often denoted by $\mathbf{Z} \perp\!\!\!\perp \mathbf{W}$. Written component-wise, $\mathbf{Z}$ and $\mathbf{W}$ are called independent if

$$F_{Z_1,\ldots,Z_m,W_1,\ldots,W_n}(z_1,\ldots,z_m,w_1,\ldots,w_n) = F_{Z_1,\ldots,Z_m}(z_1,\ldots,z_m) \cdot F_{W_1,\ldots,W_n}(w_1,\ldots,w_n) \quad \text{for all } z_1,\ldots,z_m,w_1,\ldots,w_n.$$
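
A Monte Carlo sketch of Eq.7 in the scalar case ($m = n = 1$), with $Z$ and $W$ independent by construction (evaluation points and tolerance are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500_000
Z = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # scalar case for brevity
W = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # independent of Z by construction

def F(S, s):
    """Empirical CDF as in Eq.1 (scalar version)."""
    return ((S.real <= s.real) & (S.imag <= s.imag)).mean()

z, w = 0.3 + 0.1j, -0.2 + 0.5j
joint = (((Z.real <= z.real) & (Z.imag <= z.imag)) &
         ((W.real <= w.real) & (W.imag <= w.imag))).mean()
print(abs(joint - F(Z, z) * F(W, w)) < 0.005)   # True: Eq.7 holds empirically
```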

Circular symmetry

A complex random vector $\mathbf{Z}$ is called circularly symmetric if for every deterministic $\varphi \in [-\pi, \pi)$ the distribution of $e^{i\varphi}\mathbf{Z}$ equals the distribution of $\mathbf{Z}$.[3]: pp. 500–501

Properties
  • The expectation of a circularly symmetric complex random vector is either zero or it is not defined.[3]: p. 500 
  • The pseudo-covariance matrix of a circularly symmetric complex random vector is zero.[3]: p. 584 
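
A sketch with the standard example $\mathbf{Z} = (\mathbf{X} + i\mathbf{Y})/\sqrt{2}$, $\mathbf{X}, \mathbf{Y}$ iid Gaussian, which is circularly symmetric; it exhibits both listed properties and the phase invariance of the covariance matrix (thresholds are loose Monte Carlo tolerances):

```python
import numpy as np

rng = np.random.default_rng(8)
N, n = 200_000, 2
# Z = (X + iY)/sqrt(2) with X, Y iid standard Gaussian is circularly symmetric.
Z = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)

print(np.abs(Z.mean(axis=0)).max() < 0.02)    # the mean exists here and is ~ 0

Zc = Z - Z.mean(axis=0)
print(np.abs(Zc.T @ Zc / N).max() < 0.02)     # pseudo-covariance (Eq.4) ~ 0

# Rotating by a fixed phase leaves the covariance matrix unchanged,
# since e^{i phi} K e^{-i phi} = K:
phi = 0.7
Zr = np.exp(1j * phi) * Z
Zrc = Zr - Zr.mean(axis=0)
print(np.allclose(Zrc.T @ Zrc.conj() / N, Zc.T @ Zc.conj() / N))   # True
```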

Proper complex random vectors

A complex random vector $\mathbf{Z}$ is called proper if the following three conditions are all satisfied:[1]: p. 293

  • $\operatorname{E}[\mathbf{Z}] = 0$ (zero mean)
  • $\operatorname{var}[Z_1] < \infty, \ldots, \operatorname{var}[Z_n] < \infty$ (all components have finite variance)
  • $\operatorname{E}[\mathbf{Z}\mathbf{Z}^T] = 0$ (vanishing pseudo-covariance, given the zero mean)

Two complex random vectors $\mathbf{Z}, \mathbf{W}$ are called jointly proper if the composite random vector $(Z_1, Z_2, \ldots, Z_m, W_1, W_2, \ldots, W_n)^T$ is proper.

Properties
  • A complex random vector $\mathbf{Z}$ is proper if, and only if, for all (deterministic) vectors $\mathbf{c} \in \mathbb{C}^n$ the complex random variable $\mathbf{c}^T\mathbf{Z}$ is proper.[1]: p. 293
  • Linear transformations of proper complex random vectors are proper, i.e. if $\mathbf{Z}$ is a proper random vector with $n$ components and $A$ is a deterministic $m \times n$ matrix, then the complex random vector $A\mathbf{Z}$ is also proper.[1]: p. 295
  • Every circularly symmetric complex random vector with finite variance of all its components is proper.[1]: p. 295
  • There are proper complex random vectors that are not circularly symmetric.[1]: p. 504
  • A real random vector is proper if and only if it is constant.
  • Two jointly proper complex random vectors are uncorrelated if and only if their covariance matrix is zero, i.e. if $\operatorname{K}_{\mathbf{Z}\mathbf{W}} = 0$ (illustrated in the sketch below).
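
A sketch that spot-checks the properness conditions on samples and illustrates the closure under linear maps (the helper `looks_proper` and its tolerance are illustrative, not a standard API):

```python
import numpy as np

rng = np.random.default_rng(9)
N = 200_000
# A circularly symmetric Gaussian vector: zero mean, finite variances, E[Z Z^T] = 0.
Z = (rng.standard_normal((N, 3)) + 1j * rng.standard_normal((N, 3))) / np.sqrt(2)

def looks_proper(V, tol=0.05):
    """Check the zero-mean and E[V V^T] = 0 conditions on samples (rows of V)."""
    return bool(np.abs(V.mean(axis=0)).max() < tol and
                np.abs(V.T @ V / len(V)).max() < tol)

print(looks_proper(Z))          # True, consistent with the circular-symmetry property

# Properness is preserved by deterministic linear maps:
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
print(looks_proper(Z @ A.T))    # True: samples of A Z are proper as well
```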

Cauchy–Schwarz inequality

The Cauchy–Schwarz inequality for complex random vectors is

$$\left|\operatorname{E}[\mathbf{Z}^H\mathbf{W}]\right|^2 \leq \operatorname{E}[\mathbf{Z}^H\mathbf{Z}] \operatorname{E}[|\mathbf{W}^H\mathbf{W}|].$$
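
A numerical sanity check of the inequality with correlated $\mathbf{Z}$ and $\mathbf{W}$ (the coupling coefficient 0.5 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(10)
N, n = 100_000, 3
Z = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
W = 0.5 * Z + (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))  # correlated with Z

lhs = np.abs(np.mean(np.sum(Z.conj() * W, axis=1))) ** 2          # |E[Z^H W]|^2
rhs = (np.mean(np.sum(Z.conj() * Z, axis=1)).real                 # E[Z^H Z] (real-valued)
       * np.mean(np.abs(np.sum(W.conj() * W, axis=1))))           # E[|W^H W|]
print(lhs <= rhs)   # True
```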

Characteristic function

The characteristic function of a complex random vector $\mathbf{Z}$ with $n$ components is a function $\mathbb{C}^n \to \mathbb{C}$ defined by:[1]: p. 295

$$\varphi_{\mathbf{Z}}(\boldsymbol{\omega}) = \operatorname{E}\left[e^{i\Re(\boldsymbol{\omega}^H\mathbf{Z})}\right] = \operatorname{E}\left[e^{i(\Re(\omega_1)\Re(Z_1) + \Im(\omega_1)\Im(Z_1) + \cdots + \Re(\omega_n)\Re(Z_n) + \Im(\omega_n)\Im(Z_n))}\right]$$
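
A Monte Carlo sketch of this definition, compared against the closed form $\exp(-\boldsymbol{\omega}^H \operatorname{K}_{\mathbf{Z}\mathbf{Z}} \boldsymbol{\omega}/4)$ for a circularly symmetric Gaussian vector (here $\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = I$; the closed-form expression is a standard result not derived in this article):

```python
import numpy as np

rng = np.random.default_rng(11)
N, n = 500_000, 2
# Standard circularly symmetric complex Gaussian: K = I, J = 0.
Z = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)

omega = np.array([0.8 - 0.3j, 1.1 + 0.5j])
phi_mc = np.mean(np.exp(1j * (Z @ omega.conj()).real))   # E[exp(i Re(omega^H Z))]

# Closed form for this distribution with K = I: exp(-|omega|^2 / 4).
phi_exact = np.exp(-np.vdot(omega, omega).real / 4)
print(abs(phi_mc - phi_exact) < 0.01)   # True up to Monte Carlo error
```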

References

  1. Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-0-521-19395-5.
  2. Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.
  3. Tse, David; Viswanath, Pramod (2005). Fundamentals of Wireless Communication. Cambridge University Press.