Multilinear form

Map from multiple vectors to an underlying field of scalars, linear in each argument

In abstract algebra and multilinear algebra, a multilinear form on a vector space $V$ over a field $K$ is a map

$$f\colon V^{k}\to K$$

that is separately $K$-linear in each of its $k$ arguments.[1] More generally, one can define multilinear forms on a module over a commutative ring. The rest of this article, however, will only consider multilinear forms on finite-dimensional vector spaces.

A multilinear $k$-form on $V$ over $\mathbb{R}$ is called a (covariant) $\boldsymbol{k}$-tensor, and the vector space of such forms is usually denoted $\mathcal{T}^{k}(V)$ or $\mathcal{L}^{k}(V)$.[2]
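The definition can be illustrated computationally. In coordinates, a $k$-tensor on $\mathbb{R}^n$ is determined by an $n^k$-entry coefficient array, and evaluation contracts each argument against one index. The following NumPy sketch (the tensor and vectors are arbitrary sample values, not from the references) checks linearity in one argument:

```python
import numpy as np

def evaluate_form(A, *vectors):
    """Evaluate the k-linear form with coefficient tensor A on k vectors:
    f(v_1, ..., v_k) = sum over indices of A[i_1,...,i_k] v_1[i_1] ... v_k[i_k]."""
    out = A
    for v in vectors:
        # contract the leading remaining index of A with the next vector
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))        # a sample 3-tensor on R^3
u, v, w, x = rng.standard_normal((4, 3))

# Linearity in the second slot: f(u, a v + b w, x) = a f(u, v, x) + b f(u, w, x)
a, b = 2.0, -1.5
lhs = evaluate_form(A, u, a * v + b * w, x)
rhs = a * evaluate_form(A, u, v, x) + b * evaluate_form(A, u, w, x)
assert np.isclose(lhs, rhs)
```

The same check applies to any argument slot, which is exactly what "separately linear in each argument" asserts.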

Tensor product

Given a $k$-tensor $f\in \mathcal{T}^{k}(V)$ and an $\ell$-tensor $g\in \mathcal{T}^{\ell}(V)$, a product $f\otimes g\in \mathcal{T}^{k+\ell}(V)$, known as the tensor product, can be defined by the property

$$(f\otimes g)(v_{1},\ldots ,v_{k},v_{k+1},\ldots ,v_{k+\ell })=f(v_{1},\ldots ,v_{k})\,g(v_{k+1},\ldots ,v_{k+\ell }),$$

for all $v_{1},\ldots ,v_{k+\ell }\in V$. The tensor product of multilinear forms is not commutative; however, it is bilinear and associative:

$$f\otimes (ag_{1}+bg_{2})=a(f\otimes g_{1})+b(f\otimes g_{2}),\qquad (af_{1}+bf_{2})\otimes g=a(f_{1}\otimes g)+b(f_{2}\otimes g),$$

and

$$(f\otimes g)\otimes h=f\otimes (g\otimes h).$$

If $(v_{1},\ldots ,v_{n})$ forms a basis for an $n$-dimensional vector space $V$ and $(\phi^{1},\ldots ,\phi^{n})$ is the corresponding dual basis for the dual space $V^{*}=\mathcal{T}^{1}(V)$, then the products $\phi^{i_{1}}\otimes \cdots \otimes \phi^{i_{k}}$, with $1\leq i_{1},\ldots ,i_{k}\leq n$, form a basis for $\mathcal{T}^{k}(V)$. Consequently, $\mathcal{T}^{k}(V)$ has dimension $n^{k}$.
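The defining property of $\otimes$ and the dimension count can both be verified in a small example. The sketch below (sample covectors on $\mathbb{R}^2$; all names and values are illustrative) builds the tensor product of two 1-forms and confirms that its coefficient matrix is the outer product of their coefficient vectors, and that $\dim \mathcal{T}^{k}(V)=n^{k}$ matches the count of basis products:

```python
import numpy as np
from itertools import product

def tensor_product(f, k, g, l):
    """(f ⊗ g)(v_1,...,v_{k+l}) = f(v_1,...,v_k) * g(v_{k+1},...,v_{k+l})."""
    return lambda *vs: f(*vs[:k]) * g(*vs[k:])

# Two sample 1-forms (covectors) on R^2, given by coefficient vectors [2,1] and [-1,3]
phi = lambda v: 2.0 * v[0] + 1.0 * v[1]
psi = lambda v: -1.0 * v[0] + 3.0 * v[1]
h = tensor_product(phi, 1, psi, 1)        # a bilinear form (2-tensor) on R^2

# Its matrix of values h(e_i, e_j) is the outer product of the coefficient vectors
e = np.eye(2)
H = np.array([[h(e[i], e[j]) for j in range(2)] for i in range(2)])
assert np.allclose(H, np.outer([2.0, 1.0], [-1.0, 3.0]))

# dim T^k(V) = n^k: one basis element phi^{i_1} ⊗ ... ⊗ phi^{i_k} per index tuple
n, k = 2, 3
assert len(list(product(range(n), repeat=k))) == n ** k
```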

Examples

Bilinear forms

If $k=2$, $f\colon V\times V\to K$ is referred to as a bilinear form. A familiar and important example of a (symmetric) bilinear form is the standard inner product (dot product) of vectors.
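In coordinates, every bilinear form is $f(u,v)=u^{\mathsf T}Bv$ for some matrix $B$, and the dot product is the case $B=I$. A minimal NumPy sketch (sample vectors chosen arbitrarily):

```python
import numpy as np

def bilinear_form(B):
    """The bilinear form f(u, v) = u^T B v determined by the matrix B."""
    return lambda u, v: float(u @ B @ v)

dot = bilinear_form(np.eye(3))            # B = I gives the standard dot product
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
assert np.isclose(dot(u, v), np.dot(u, v))   # 4 + 10 + 18 = 32
assert np.isclose(dot(u, v), dot(v, u))      # symmetric, since I is symmetric
```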

Alternating multilinear forms

An important class of multilinear forms consists of the alternating multilinear forms, which have the additional property that[3]

$$f(x_{\sigma (1)},\ldots ,x_{\sigma (k)})=\operatorname{sgn}(\sigma )\,f(x_{1},\ldots ,x_{k}),$$

where $\sigma\colon \mathbf{N}_{k}\to \mathbf{N}_{k}$ is a permutation and $\operatorname{sgn}(\sigma)$ denotes its sign (+1 if even, −1 if odd). As a consequence, alternating multilinear forms are antisymmetric with respect to swapping any two arguments (i.e., $\sigma(p)=q$, $\sigma(q)=p$, and $\sigma(i)=i$ for $1\leq i\leq k$, $i\neq p,q$):

$$f(x_{1},\ldots ,x_{p},\ldots ,x_{q},\ldots ,x_{k})=-f(x_{1},\ldots ,x_{q},\ldots ,x_{p},\ldots ,x_{k}).$$

With the additional hypothesis that the characteristic of the field $K$ is not 2, setting $x_{p}=x_{q}=x$ implies as a corollary that $f(x_{1},\ldots ,x,\ldots ,x,\ldots ,x_{k})=0$; that is, the form vanishes whenever two of its arguments are equal. Note, however, that some authors[4] use this last condition as the defining property of alternating forms. This definition implies the property given at the beginning of the section, but as noted above, the converse implication holds only when $\operatorname{char}(K)\neq 2$.

An alternating multilinear $k$-form on $V$ over $\mathbb{R}$ is called a multicovector of degree $\boldsymbol{k}$ or a $\boldsymbol{k}$-covector, and the vector space of such alternating forms, a subspace of $\mathcal{T}^{k}(V)$, is generally denoted $\mathcal{A}^{k}(V)$, or, using the notation for the isomorphic $k$th exterior power of $V^{*}$ (the dual space of $V$), $\bigwedge^{k}V^{*}$.[5] Note that linear functionals (multilinear 1-forms over $\mathbb{R}$) are trivially alternating, so that $\mathcal{A}^{1}(V)=\mathcal{T}^{1}(V)=V^{*}$, while, by convention, 0-forms are defined to be scalars: $\mathcal{A}^{0}(V)=\mathcal{T}^{0}(V)=\mathbb{R}$.

The determinant on $n\times n$ matrices, viewed as an $n$-argument function of the column vectors, is an important example of an alternating multilinear form.
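The three characteristic behaviors of the determinant as an alternating multilinear form (sign flip under a column swap, vanishing on a repeated column, and linearity in each column) can be checked directly. A NumPy sketch with a random sample matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))

# Swapping two columns flips the sign of the determinant (antisymmetry)
swapped = M[:, [1, 0, 2]]
assert np.isclose(np.linalg.det(swapped), -np.linalg.det(M))

# A repeated column forces the determinant to vanish
degenerate = np.column_stack([M[:, 0], M[:, 0], M[:, 2]])
assert np.isclose(np.linalg.det(degenerate), 0.0)

# Linearity in the first column: det(a c1 + b c2, c2, c3) = a det(M) + b·0
a, b = 2.0, -3.0
N = M.copy()
N[:, 0] = a * M[:, 0] + b * M[:, 1]
assert np.isclose(np.linalg.det(N), a * np.linalg.det(M))
```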

Exterior product

The tensor product of alternating multilinear forms is, in general, no longer alternating. However, by summing over all permutations of the tensor product, taking into account the parity of each term, the exterior product ($\wedge$, also known as the wedge product) of multicovectors can be defined, so that if $f\in \mathcal{A}^{k}(V)$ and $g\in \mathcal{A}^{\ell}(V)$, then $f\wedge g\in \mathcal{A}^{k+\ell}(V)$:

$$(f\wedge g)(v_{1},\ldots ,v_{k+\ell })={\frac {1}{k!\,\ell !}}\sum _{\sigma \in S_{k+\ell }}\operatorname{sgn}(\sigma )\,f(v_{\sigma (1)},\ldots ,v_{\sigma (k)})\,g(v_{\sigma (k+1)},\ldots ,v_{\sigma (k+\ell )}),$$

where the sum is taken over $S_{k+\ell}$, the set of all permutations of $k+\ell$ elements. The exterior product is bilinear, associative, and graded-alternating: if $f\in \mathcal{A}^{k}(V)$ and $g\in \mathcal{A}^{\ell}(V)$, then $f\wedge g=(-1)^{k\ell }\,g\wedge f$.

Given a basis $(v_{1},\ldots ,v_{n})$ for $V$ and dual basis $(\phi^{1},\ldots ,\phi^{n})$ for $V^{*}=\mathcal{A}^{1}(V)$, the exterior products $\phi^{i_{1}}\wedge \cdots \wedge \phi^{i_{k}}$, with $1\leq i_{1}<\cdots <i_{k}\leq n$, form a basis for $\mathcal{A}^{k}(V)$. Hence, the dimension of $\mathcal{A}^{k}(V)$ for $n$-dimensional $V$ is $\binom{n}{k}={\frac{n!}{(n-k)!\,k!}}$.
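The permutation-sum definition of $\wedge$ can be implemented directly. The sketch below (an illustrative implementation; `sign` and `wedge` are names chosen here, not standard library functions) builds $dx^{1}\wedge dx^{2}$ on $\mathbb{R}^3$ from the formula with the $1/(k!\,\ell!)$ normalization and verifies antisymmetry and the basis count $\binom{n}{k}$:

```python
import math
from itertools import permutations, combinations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices (via inversions)."""
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def wedge(f, k, g, l):
    """(f ∧ g)(v_1..v_{k+l}) = 1/(k! l!) Σ_σ sgn(σ) f(v_σ(1..k)) g(v_σ(k+1..k+l))."""
    def fg(*vs):
        total = 0.0
        for s in permutations(range(k + l)):
            total += sign(s) * f(*(vs[i] for i in s[:k])) * g(*(vs[i] for i in s[k:]))
        return total / (math.factorial(k) * math.factorial(l))
    return fg

# Two basic 1-forms on R^3
phi = lambda v: v[0]            # dx^1
psi = lambda v: v[1]            # dx^2
w = wedge(phi, 1, psi, 1)       # dx^1 ∧ dx^2

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
assert w(u, v) == 1.0           # (dx^1 ∧ dx^2)(e_1, e_2) = 1
assert w(v, u) == -1.0          # antisymmetry under a swap
assert w(u, u) == 0.0           # vanishes on a repeated argument

# dim A^k(V) = C(n, k): e.g. n = 4, k = 2 gives 6 basis wedge products
assert len(list(combinations(range(4), 2))) == math.comb(4, 2) == 6
```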

Differential forms

Differential forms are mathematical objects constructed via tangent spaces and multilinear forms that behave, in many ways, like differentials in the classical sense. Though conceptually and computationally useful, differentials are founded on ill-defined notions of infinitesimal quantities developed early in the history of calculus. Differential forms provide a mathematically rigorous and precise framework to modernize this long-standing idea. Differential forms are especially useful in multivariable calculus (analysis) and differential geometry because they possess transformation properties that allow them to be integrated on curves, surfaces, and their higher-dimensional analogues (differentiable manifolds). One far-reaching application is the modern statement of Stokes' theorem, a sweeping generalization of the fundamental theorem of calculus to higher dimensions.

The synopsis below is primarily based on Spivak (1965)[6] and Tu (2011).[3]

Definition of differential k-forms and construction of 1-forms

To define differential forms on open subsets $U\subset \mathbb{R}^{n}$, we first need the notion of the tangent space of $\mathbb{R}^{n}$ at $p$, usually denoted $T_{p}\mathbb{R}^{n}$ or $\mathbb{R}_{p}^{n}$. The vector space $\mathbb{R}_{p}^{n}$ can be defined most conveniently as the set of elements $v_{p}$ ($v\in \mathbb{R}^{n}$, with $p\in \mathbb{R}^{n}$ fixed) with vector addition and scalar multiplication defined by $v_{p}+w_{p}:=(v+w)_{p}$ and $a\cdot (v_{p}):=(a\cdot v)_{p}$, respectively. Moreover, if $(e_{1},\ldots ,e_{n})$ is the standard basis for $\mathbb{R}^{n}$, then $((e_{1})_{p},\ldots ,(e_{n})_{p})$ is the analogous standard basis for $\mathbb{R}_{p}^{n}$. In other words, each tangent space $\mathbb{R}_{p}^{n}$ can simply be regarded as a copy of $\mathbb{R}^{n}$ (a set of tangent vectors) based at the point $p$. The collection (disjoint union) of tangent spaces of $\mathbb{R}^{n}$ at all $p\in \mathbb{R}^{n}$ is known as the tangent bundle of $\mathbb{R}^{n}$ and is usually denoted $T\mathbb{R}^{n}:=\bigcup _{p\in \mathbb{R}^{n}}\mathbb{R}_{p}^{n}$. While the definition given here provides a simple description of the tangent space of $\mathbb{R}^{n}$, there are other, more sophisticated constructions that are better suited for defining the tangent spaces of smooth manifolds in general (see the article on tangent spaces for details).

A differential $\boldsymbol{k}$-form on $U\subset \mathbb{R}^{n}$ is defined as a function $\omega$ that assigns to every $p\in U$ a $k$-covector on the tangent space of $\mathbb{R}^{n}$ at $p$, usually denoted $\omega_{p}:=\omega(p)\in \mathcal{A}^{k}(\mathbb{R}_{p}^{n})$. In brief, a differential $k$-form is a $k$-covector field. The space of $k$-forms on $U$ is usually denoted $\Omega^{k}(U)$; thus if $\omega$ is a differential $k$-form, we write $\omega \in \Omega^{k}(U)$. By convention, a continuous function on $U$ is a differential 0-form: $f\in C^{0}(U)=\Omega^{0}(U)$.

We first construct differential 1-forms from 0-forms and deduce some of their basic properties. To simplify the discussion below, we will only consider smooth differential forms constructed from smooth ($C^{\infty}$) functions. Let $f\colon \mathbb{R}^{n}\to \mathbb{R}$ be a smooth function. We define the 1-form $df$ on $U$ for $p\in U$ and $v_{p}\in \mathbb{R}_{p}^{n}$ by $(df)_{p}(v_{p}):=Df|_{p}(v)$, where $Df|_{p}\colon \mathbb{R}^{n}\to \mathbb{R}$ is the total derivative of $f$ at $p$. (Recall that the total derivative is a linear transformation.) Of particular interest are the projection maps (also known as coordinate functions) $\pi^{i}\colon \mathbb{R}^{n}\to \mathbb{R}$, defined by $x\mapsto x^{i}$, where $x^{i}$ is the $i$th standard coordinate of $x\in \mathbb{R}^{n}$. The 1-forms $d\pi^{i}$ are known as the basic 1-forms; they are conventionally denoted $dx^{i}$. If the standard coordinates of $v_{p}\in \mathbb{R}_{p}^{n}$ are $(v^{1},\ldots ,v^{n})$, then applying the definition of $df$ yields $dx_{p}^{i}(v_{p})=v^{i}$, so that $dx_{p}^{i}((e_{j})_{p})=\delta_{j}^{i}$, where $\delta_{j}^{i}$ is the Kronecker delta.[7] Thus, as the dual of the standard basis for $\mathbb{R}_{p}^{n}$, $(dx_{p}^{1},\ldots ,dx_{p}^{n})$ forms a basis for $\mathcal{A}^{1}(\mathbb{R}_{p}^{n})=(\mathbb{R}_{p}^{n})^{*}$. As a consequence, if $\omega$ is a 1-form on $U$, then $\omega$ can be written as $\sum a_{i}\,dx^{i}$ for smooth functions $a_{i}\colon U\to \mathbb{R}$. Furthermore, we can derive an expression for $df$ that coincides with the classical expression for a total differential:

$$df=\sum _{i=1}^{n}D_{i}f\,dx^{i}={\partial f \over \partial x^{1}}\,dx^{1}+\cdots +{\partial f \over \partial x^{n}}\,dx^{n}.$$
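The identity $(df)_{p}(v_{p})=\nabla f(p)\cdot v$ can be checked numerically against a finite-difference directional derivative. A minimal sketch, using the illustrative choice $f(x,y)=x^{2}y$ (so $df=2xy\,dx+x^{2}\,dy$):

```python
import numpy as np

# Sample smooth function f(x, y) = x^2 y on R^2 and its hand-computed gradient
f = lambda x: x[0] ** 2 * x[1]
grad_f = lambda x: np.array([2 * x[0] * x[1], x[0] ** 2])

p = np.array([1.0, 2.0])          # base point
v = np.array([0.3, -0.5])         # tangent vector v_p

# (df)_p(v_p) = Df|_p(v) = ∇f(p) · v ...
df_p_v = grad_f(p) @ v
# ... which matches a central-difference directional derivative of f at p
h = 1e-6
numeric = (f(p + h * v) - f(p - h * v)) / (2 * h)
assert np.isclose(df_p_v, numeric, atol=1e-6)
```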

[Comments on notation: In this article, we follow the convention from tensor calculus and differential geometry in which multivectors and multicovectors are written with lower and upper indices, respectively. Since differential forms are multicovector fields, upper indices are employed to index them.[3] The opposite rule applies to the components of multivectors and multicovectors, which instead are written with upper and lower indices, respectively. For instance, we represent the standard coordinates of a vector $v\in \mathbb{R}^{n}$ as $(v^{1},\ldots ,v^{n})$, so that $v=\sum _{i=1}^{n}v^{i}e_{i}$ in terms of the standard basis $(e_{1},\ldots ,e_{n})$. In addition, superscripts appearing in the denominator of an expression (as in $\frac{\partial f}{\partial x^{i}}$) are treated as lower indices in this convention. When indices are applied and interpreted in this manner, the number of upper indices minus the number of lower indices in each term of an expression is conserved, both within the sum and across an equal sign, a feature that serves as a useful mnemonic device and helps pinpoint errors made during manual computation.]

Basic operations on differential k-forms

The exterior product ($\wedge$) and exterior derivative ($d$) are two fundamental operations on differential forms. The exterior product of a $k$-form and an $\ell$-form is a $(k+\ell)$-form, while the exterior derivative of a $k$-form is a $(k+1)$-form. Thus, both operations generate differential forms of higher degree from those of lower degree.

The exterior product $\wedge\colon \Omega^{k}(U)\times \Omega^{\ell}(U)\to \Omega^{k+\ell}(U)$ of differential forms is a special case of the exterior product of multicovectors in general (see above). As is true in general for the exterior product, the exterior product of differential forms is bilinear, associative, and graded-alternating.

More concretely, if $\omega =a_{i_{1}\ldots i_{k}}\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}$ and $\eta =a_{j_{1}\ldots j_{\ell }}\,dx^{j_{1}}\wedge \cdots \wedge dx^{j_{\ell }}$, then

$$\omega \wedge \eta =a_{i_{1}\ldots i_{k}}a_{j_{1}\ldots j_{\ell }}\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\wedge dx^{j_{1}}\wedge \cdots \wedge dx^{j_{\ell }}.$$

Furthermore, for any set of indices $\{\alpha _{1},\ldots ,\alpha _{m}\}$,

$$dx^{\alpha _{1}}\wedge \cdots \wedge dx^{\alpha _{p}}\wedge \cdots \wedge dx^{\alpha _{q}}\wedge \cdots \wedge dx^{\alpha _{m}}=-dx^{\alpha _{1}}\wedge \cdots \wedge dx^{\alpha _{q}}\wedge \cdots \wedge dx^{\alpha _{p}}\wedge \cdots \wedge dx^{\alpha _{m}}.$$

If $I=\{i_{1},\ldots ,i_{k}\}$, $J=\{j_{1},\ldots ,j_{\ell }\}$, and $I\cap J=\varnothing$, then the indices of $\omega \wedge \eta$ can be arranged in ascending order by a (finite) sequence of such swaps. Since $dx^{\alpha }\wedge dx^{\alpha }=0$, $I\cap J\neq \varnothing$ implies that $\omega \wedge \eta =0$. Finally, as a consequence of bilinearity, if $\omega$ and $\eta$ are sums of several terms, their exterior product obeys distributivity with respect to each of these terms.

The collection of the exterior products of basic 1-forms $\{dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\mid 1\leq i_{1}<\cdots <i_{k}\leq n\}$ constitutes a basis for the space of differential $k$-forms. Thus, any $\omega \in \Omega^{k}(U)$ can be written in the form

$$\omega =\sum _{i_{1}<\cdots <i_{k}}a_{i_{1}\ldots i_{k}}\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}},\qquad (*)$$

where $a_{i_{1}\ldots i_{k}}\colon U\to \mathbb{R}$ are smooth functions. With each set of indices $\{i_{1},\ldots ,i_{k}\}$ placed in ascending order, (*) is said to be the standard presentation of $\omega$.

In the previous section, the 1-form $df$ was defined by taking the exterior derivative of the 0-form (continuous function) $f$. We now extend this by defining the exterior derivative operator $d\colon \Omega^{k}(U)\to \Omega^{k+1}(U)$ for $k\geq 1$. If the standard presentation of the $k$-form $\omega$ is given by (*), the $(k+1)$-form $d\omega$ is defined by

$$d\omega :=\sum _{i_{1}<\cdots <i_{k}}da_{i_{1}\ldots i_{k}}\wedge dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}.$$

A property of $d$ that holds for all smooth forms is that the second exterior derivative of any $\omega$ vanishes identically: $d^{2}\omega =d(d\omega )\equiv 0$. This can be established directly from the definition of $d$ and the equality of mixed second-order partial derivatives of $C^{2}$ functions (see the article on closed and exact forms for details).
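For a 0-form $f$ on $\mathbb{R}^2$, the definition gives $d(df)=(\partial_x f_y-\partial_y f_x)\,dx\wedge dy$, so $d^2f=0$ reduces to the symmetry of mixed partials. A SymPy sketch (the sample function is an arbitrary smooth choice) verifies this coefficient vanishes:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(x * y)     # any smooth 0-form on R^2 will do

# df = f_x dx + f_y dy, so d(df) = (∂_x f_y − ∂_y f_x) dx ∧ dy;
# the coefficient vanishes because mixed second partials of a C^2 function agree
coeff = sp.diff(sp.diff(f, y), x) - sp.diff(sp.diff(f, x), y)
assert sp.simplify(coeff) == 0
```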

Integration of differential forms and Stokes' theorem for chains

To integrate a differential form over a parameterized domain, we first need to introduce the notion of the pullback of a differential form. Roughly speaking, when a differential form is integrated, applying the pullback transforms it in a way that correctly accounts for a change of coordinates.

Given a differentiable function $f\colon \mathbb{R}^{n}\to \mathbb{R}^{m}$ and a $k$-form $\eta \in \Omega^{k}(\mathbb{R}^{m})$, we call $f^{*}\eta \in \Omega^{k}(\mathbb{R}^{n})$ the pullback of $\eta$ by $f$ and define it as the $k$-form such that

$$(f^{*}\eta )_{p}(v_{1p},\ldots ,v_{kp}):=\eta _{f(p)}(f_{*}(v_{1p}),\ldots ,f_{*}(v_{kp})),$$

for $v_{1p},\ldots ,v_{kp}\in \mathbb{R}_{p}^{n}$, where $f_{*}\colon \mathbb{R}_{p}^{n}\to \mathbb{R}_{f(p)}^{m}$ is the map $v_{p}\mapsto (Df|_{p}(v))_{f(p)}$.

If $\omega =f\,dx^{1}\wedge \cdots \wedge dx^{n}$ is an $n$-form on $\mathbb{R}^{n}$ (i.e., $\omega \in \Omega^{n}(\mathbb{R}^{n})$), we define its integral over the unit $n$-cell as the iterated Riemann integral of $f$:

$$\int _{[0,1]^{n}}\omega =\int _{[0,1]^{n}}f\,dx^{1}\wedge \cdots \wedge dx^{n}:=\int _{0}^{1}\cdots \int _{0}^{1}f\,dx^{1}\cdots dx^{n}.$$
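The iterated integral can be approximated by any Riemann sum. A midpoint-rule sketch for the sample 2-form $\omega = xy\,dx^{1}\wedge dx^{2}$ on the unit square, whose exact integral is $\int_0^1\int_0^1 xy\,dx\,dy = 1/4$:

```python
import numpy as np

# Midpoint-rule approximation of ∫_{[0,1]^2} f dx^1 ∧ dx^2 for f(x, y) = x y
N = 200
m = (np.arange(N) + 0.5) / N          # midpoints of N subintervals of [0, 1]
X, Y = np.meshgrid(m, m)              # midpoint grid on the unit 2-cell
approx = np.sum(X * Y) / N**2         # each cell has area 1/N^2
assert np.isclose(approx, 0.25, atol=1e-4)
```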

Next, we consider a domain of integration parameterized by a differentiable function $c\colon [0,1]^{n}\to A\subset \mathbb{R}^{m}$, known as an $n$-cube. To define the integral of $\omega \in \Omega^{n}(A)$ over $c$, we "pull back" from $A$ to the unit $n$-cell:

$$\int _{c}\omega :=\int _{[0,1]^{n}}c^{*}\omega .$$
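As a worked instance (the curve and form are illustrative choices): pulling back $\omega = x\,dy$ along the 1-cube $c(t)=(\cos(\pi t/2),\sin(\pi t/2))$, a quarter of the unit circle, gives $c^{*}\omega = \tfrac{\pi}{2}\cos^{2}(\pi t/2)\,dt$, so $\int_{c}\omega = \pi/4$. A midpoint-rule check:

```python
import numpy as np

# c*ω for ω = x dy along c(t) = (cos(πt/2), sin(πt/2)):
# x(c(t)) · d/dt[sin(πt/2)] dt = (π/2) cos²(πt/2) dt, integrated over [0, 1]
N = 2000
t = (np.arange(N) + 0.5) / N                           # midpoints in [0, 1]
integrand = (np.pi / 2) * np.cos(np.pi * t / 2) ** 2   # coefficient of c*ω
approx = integrand.sum() / N
assert np.isclose(approx, np.pi / 4, atol=1e-4)
```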

To integrate over more general domains, we define an $\boldsymbol{n}$-chain $C=\sum _{i}n_{i}c_{i}$ as a formal sum of $n$-cubes and set

$$\int _{C}\omega :=\sum _{i}n_{i}\int _{c_{i}}\omega .$$

An appropriate definition of the $(n-1)$-chain $\partial C$, known as the boundary of $C$,[8] allows us to state the celebrated Stokes' theorem (Stokes–Cartan theorem) for chains in a subset of $\mathbb{R}^{m}$:

If $\omega$ is a smooth $(n-1)$-form on an open set $A\subset \mathbb{R}^{m}$ and $C$ is a smooth $n$-chain in $A$, then $\int _{C}d\omega =\int _{\partial C}\omega$.
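A concrete low-dimensional instance (chosen for illustration, and equivalent to Green's theorem): take $\omega = x\,dy$ and let $C$ be the unit square $[0,1]^2$, so $d\omega = dx\wedge dy$. Then $\int_C d\omega$ is the area of the square, while $\int_{\partial C}\omega$ picks up only the vertical edges of the counterclockwise boundary; both sides equal 1:

```python
import numpy as np

N = 1000
t = (np.arange(N) + 0.5) / N               # midpoints in [0, 1]

lhs = np.sum(np.ones((N, N))) / N**2       # ∫_C dx ∧ dy = area of [0,1]^2 = 1

# Boundary, counterclockwise: bottom and top edges have dy = 0 and contribute 0.
right = np.sum(np.full(N, 1.0)) / N        # ∫ x dy on x = 1, y: 0 → 1
left = -np.sum(np.full(N, 0.0)) / N        # ∫ x dy on x = 0, y: 1 → 0 (sign flips)
rhs = right + left

assert np.isclose(lhs, rhs)                # Stokes: ∫_C dω = ∫_∂C ω (both are 1)
```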

Using more sophisticated machinery (e.g., germs and derivations), the tangent space $T_{p}M$ of any smooth manifold $M$ (not necessarily embedded in $\mathbb{R}^{m}$) can be defined. Analogously, a differential form $\omega \in \Omega^{k}(M)$ on a general smooth manifold is a map $\omega \colon p\in M\mapsto \omega _{p}\in \mathcal{A}^{k}(T_{p}M)$. Stokes' theorem can be further generalized to arbitrary smooth manifolds-with-boundary and even certain "rough" domains (see the article on Stokes' theorem for details).


References

  1. ^ Weisstein, Eric W. "Multilinear Form". MathWorld.
  2. ^ Many authors use the opposite convention, writing $\mathcal{T}^{k}(V)$ to denote the contravariant $k$-tensors on $V$ and $\mathcal{T}_{k}(V)$ to denote the covariant $k$-tensors on $V$.
  3. ^ a b c Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. pp. 22–23. ISBN 978-1-4419-7399-3.
  4. ^ Halmos, Paul R. (1958). Finite-Dimensional Vector Spaces (2nd ed.). Van Nostrand. p. 50. ISBN 0-387-90093-4.
  5. ^ Spivak uses $\Omega^{k}(V)$ for the space of $k$-covectors on $V$. However, this notation is more commonly reserved for the space of differential $k$-forms on $V$. In this article, we use $\Omega^{k}(V)$ to mean the latter.
  6. ^ Spivak, Michael (1965). Calculus on Manifolds. W. A. Benjamin, Inc. pp. 75–146. ISBN 0805390219.
  7. ^ The Kronecker delta is usually denoted by $\delta_{ij}=\delta(i,j)$ and defined as $\delta \colon X\times X\to \{0,1\},\ (i,j)\mapsto {\begin{cases}1,&i=j\\0,&i\neq j\end{cases}}$. Here, the notation $\delta_{j}^{i}$ is used to conform to the tensor calculus convention on the use of upper and lower indices.
  8. ^ The formal definition of the boundary of a chain is somewhat involved and is omitted here (see Spivak 1965, pp. 98–99 for a discussion). Intuitively, if $C$ maps to a square, then $\partial C$ is a linear combination of functions that map to its edges in a counterclockwise manner. The boundary of a chain is distinct from the notion of a boundary in point-set topology.