Whereas standard Riemann integration sums a function f(x) over a continuous range of values of x, functional integration sums a functional G[f], which can be thought of as a "function of a function," over a continuous range (or space) of functions f. Most functional integrals cannot be evaluated exactly and must instead be evaluated by perturbation methods. The formal definition of a functional integral is

$$\int G[f]\;\mathcal{D}[f] \equiv \int_{\mathbb{R}}\cdots\int_{\mathbb{R}} G[f]\,\prod_x df(x)\;.$$
However, in most cases the function f(x) can be expanded in an infinite series of orthogonal functions, such as $f(x) = \sum_n f_n H_n(x)$, and then the definition becomes

$$\int G[f]\;\mathcal{D}[f] \equiv \int_{\mathbb{R}}\cdots\int_{\mathbb{R}} G(f_1; f_2; \ldots)\,\prod_n df_n\;,$$
which is slightly more understandable. The integral is shown to be a functional integral with a capital $\mathcal{D}$. Sometimes the argument is written in square brackets, $\mathcal{D}[f]$, to indicate the functional dependence of the function in the functional integration measure.
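As a sketch of how the mode-expansion definition is used in practice, the following hypothetical example (all values chosen purely for illustration) truncates the expansion at three modes with a diagonal Gaussian weight, so the functional integral reduces to a product of ordinary Gaussian integrals:

```python
import numpy as np

# Hypothetical illustration (values chosen here, not from the text): truncate
# the mode expansion f(x) = sum_n f_n H_n(x) at three modes, so the functional
# integral becomes an ordinary 3-fold integral.  For the diagonal Gaussian
# weight exp(-1/2 sum_n lam_n f_n^2) the modes decouple, and each factor is a
# one-dimensional Gaussian integral equal to sqrt(2*pi/lam_n).

lam = np.array([1.0, 2.0, 5.0])     # assumed eigenvalues of the kernel

# integrate each decoupled mode numerically on a wide, fine grid
f = np.linspace(-10.0, 10.0, 20001)
df = f[1] - f[0]
numeric = np.prod([np.sum(np.exp(-0.5 * l * f**2)) * df for l in lam])

exact = np.prod(np.sqrt(2.0 * np.pi / lam))
print(numeric, exact)               # the two agree to high accuracy
```

Each extra mode multiplies the answer by another finite factor; for a generic kernel the product over infinitely many modes diverges, which is why only quotients of such integrals are finite, as noted below.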
Most functional integrals are actually infinite, but often the limit of the quotient of two related functional integrals can still be finite. The functional integrals that can be evaluated exactly usually start with the following Gaussian integral:

$$\frac{\displaystyle\int \exp\left\{-\frac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy + \int_{\mathbb{R}} J(x)f(x)\,dx\right\}\mathcal{D}[f]}{\displaystyle\int \exp\left\{-\frac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy\right\}\mathcal{D}[f]} = \exp\left\{\frac{1}{2}\int_{\mathbb{R}^2} J(x)K^{-1}(x;y)J(y)\,dx\,dy\right\},$$
in which $K(x;y) = K(y;x)$. By functionally differentiating this with respect to J(x) and then setting J = 0, the integrand becomes an exponential multiplied by a monomial in f. To see this, let us use the following notation:
$$G[f,J] = -\frac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy + \int_{\mathbb{R}} J(x)f(x)\,dx\,, \qquad W[J] = \int \exp\{G[f,J]\}\,\mathcal{D}[f]\,.$$
With this notation the first equation can be written as:
$$\frac{W[J]}{W[0]} = \exp\left\{\frac{1}{2}\int_{\mathbb{R}^2} J(x)K^{-1}(x;y)J(y)\,dx\,dy\right\}.$$
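This Gaussian identity can be checked in a finite-dimensional toy model by replacing the functional integral with an ordinary integral over $\mathbb{R}^2$; the matrix K and source J below are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Finite-dimensional check of W[J]/W[0] = exp(1/2 J.K^{-1}.J): replace the
# functional integral by an ordinary integral over (f1, f2) in R^2.
# K and J are arbitrary illustrative choices.

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive-definite "kernel"
J = np.array([0.3, -0.2])           # "source"

g = np.linspace(-8.0, 8.0, 401)     # the integrand is negligible outside
dg = g[1] - g[0]
f1, f2 = np.meshgrid(g, g, indexing="ij")

quad = -0.5 * (K[0, 0] * f1**2 + 2.0 * K[0, 1] * f1 * f2 + K[1, 1] * f2**2)
num = np.sum(np.exp(quad + J[0] * f1 + J[1] * f2)) * dg**2   # W[J]
den = np.sum(np.exp(quad)) * dg**2                           # W[0]

exact = np.exp(0.5 * J @ np.linalg.solve(K, J))
print(num / den, exact)             # the ratio matches the closed form
```

Note that num and den each depend on the arbitrary grid truncation, but their ratio does not (to quadrature accuracy), mirroring how the quotient of two divergent functional integrals can be finite.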
Now, taking functional derivatives of the definition of $W[J]$ and then evaluating at $J = 0$, one obtains:
$$\left.\frac{\delta}{\delta J(a)}\,W[J]\right|_{J=0} = \int f(a)\exp\{G[f,0]\}\,\mathcal{D}[f]\,,$$

$$\left.\frac{\delta^2 W[J]}{\delta J(a)\,\delta J(b)}\right|_{J=0} = \int f(a)f(b)\exp\{G[f,0]\}\,\mathcal{D}[f]\,,$$

$$\vdots$$
which is the result anticipated. Moreover, by using the first equation one arrives at the useful result

$$\left.\frac{\delta^2 W[J]}{\delta J(a)\,\delta J(b)}\right|_{J=0} = K^{-1}(a;b)\,W[0]\,.$$
Putting these results together and going back to the original notation, we have:
$$\frac{\displaystyle\int f(a)f(b)\exp\left\{-\frac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy\right\}\mathcal{D}[f]}{\displaystyle\int \exp\left\{-\frac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy\right\}\mathcal{D}[f]} = K^{-1}(a;b)\,.$$
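The two-point identity can likewise be verified by Monte Carlo in a discretized setting, where the Gaussian weight exp(−½ f·K·f) is, up to normalization, an ordinary multivariate normal with covariance K⁻¹ (the 3×3 kernel below is an illustrative assumption):

```python
import numpy as np

# Monte Carlo check of <f(a)f(b)> = K^{-1}(a;b) in a discretized setting:
# for a finite symmetric positive-definite matrix K, the weight
# exp(-1/2 f.K.f) is (up to normalization) a multivariate normal with
# covariance K^{-1}, so sample averages of f[a]*f[b] estimate K^{-1}[a, b].

rng = np.random.default_rng(0)
K = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])           # illustrative kernel, not from the text

cov = np.linalg.inv(K)
samples = rng.multivariate_normal(np.zeros(3), cov, size=200_000)

est = samples.T @ samples / len(samples)  # empirical two-point function
print(np.max(np.abs(est - cov)))          # small Monte Carlo error
```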
Another useful integral is the functional delta function:

$$\int \exp\left\{\int_{\mathbb{R}} i\,f(x)g(x)\,dx\right\}\mathcal{D}[g] = \delta[f] = \prod_x \delta(f(x))\,,$$
which is useful for specifying constraints. Functional integrals can also be performed over Grassmann-valued functions $\psi(x)$, for which $\psi(x)\psi(y) = -\psi(y)\psi(x)$, which is useful in quantum electrodynamics for calculations involving fermions.
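For Grassmann-valued fields the Gaussian integral produces a determinant rather than an inverse determinant: with the standard Berezin convention, $\int \prod_n d\bar\psi_n\,d\psi_n\,\exp(-\bar\psi A \psi) = \det A$. A small sketch (the matrix A is an arbitrary illustrative choice) reproduces the expansion that the anticommuting integration rules select, which is exactly the Leibniz permutation formula for det A:

```python
import numpy as np
from itertools import permutations

# Sketch of the Berezin (Grassmann) Gaussian integral.  Expanding
# exp(-psibar.A.psi), the anticommuting integration rules keep exactly one
# term per permutation, weighted by its sign -- the Leibniz formula for the
# determinant.  We reproduce that expansion directly and compare to det A.

def parity(p):
    """Sign of a permutation given as a tuple of indices."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def berezin_gaussian(A):
    n = len(A)
    return sum(parity(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],          # arbitrary illustrative matrix
              [0.5, 3.0, 1.0],
              [0.0, 1.0, 1.5]])
print(berezin_gaussian(A), np.linalg.det(A))  # the two agree
```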
Functional integrals where the space of integration consists of paths (ν = 1) can be defined in many different ways. The definitions fall in two different classes: the constructions derived from Wiener's theory yield an integral based on a measure, whereas the constructions following Feynman's path integral do not. Even within these two broad divisions, the integrals are not identical, that is, they are defined differently for different classes of functions.
In the Wiener integral, a probability is assigned to a class of Brownian motion paths. The class consists of the paths w that are known to go through a small region of space at a given time. Passage through different regions of space is assumed to be independent, and the distance between any two points of the Brownian path is assumed to be Gaussian-distributed with a variance that depends on the time t and on a diffusion constant D:

$$\Pr\big(w(s+t)\in A \mid w(s)=x\big) = \int_A \frac{1}{\sqrt{4\pi Dt}}\,\exp\left\{-\frac{(y-x)^2}{4Dt}\right\} dy\,.$$
The probability for the class of paths can be found by multiplying the probabilities of starting in one region and then being at the next. The Wiener measure can be developed by considering the limit of many small regions.
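The Wiener construction above can be sketched numerically: build paths from independent Gaussian increments and check that the endpoint variance grows linearly in time. The normalization Var = 2Dt used here is the physics (Einstein) convention and is an assumption; some texts absorb the factor of 2 into D.

```python
import numpy as np

# Sketch of the Wiener construction: Brownian paths are built from
# independent Gaussian increments over small time steps, and the variance
# of the displacement grows linearly in time.  The convention Var = 2*D*t
# is an assumption (some texts use D*t).

rng = np.random.default_rng(1)
D, dt, n_steps, n_paths = 0.5, 0.05, 200, 20_000

# each row is one discretized path, built step by step from increments
increments = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)       # w(t) sampled at t = dt, 2dt, ...

t_final = n_steps * dt
print(paths[:, -1].var(), 2.0 * D * t_final)  # empirical vs 2*D*t
```

Because the increments over disjoint intervals are independent, the probability of a whole discretized path is the product of its step probabilities, which is precisely the multiplication of region probabilities described above.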