The directional derivative of a scalar function $f(\mathbf{x}) = f(x_1, x_2, \ldots, x_n)$ along a vector $\mathbf{v} = (v_1, \ldots, v_n)$ is the function $\nabla_{\mathbf{v}} f$ defined by the limit[1]
$$\nabla_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}.$$
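As a concrete numerical illustration, the limit can be approximated by a forward difference with shrinking h. This is a minimal sketch assuming numpy; the function f(x₁, x₂) = x₁²x₂, the point, and the direction v are arbitrary choices, not taken from the text above.

```python
import numpy as np

def f(x):
    # Example scalar function f(x1, x2) = x1^2 * x2 (arbitrary choice)
    return x[0] ** 2 * x[1]

def directional_derivative(f, x, v, h=1e-6):
    # Forward-difference approximation of the limit (f(x + h v) - f(x)) / h
    return (f(x + h * v) - f(x)) / h

x = np.array([1.0, 2.0])
v = np.array([1.0, 2.0])          # not necessarily a unit vector
for h in (1e-1, 1e-3, 1e-6):
    print(h, directional_derivative(f, x, v, h))
# The quotient approaches 2*x1*x2*v1 + x1^2*v2 = 6 as h -> 0.
```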
This definition is valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined.[2]
If the function f is differentiable at x, then the directional derivative exists along any unit vector v at x, and one has
$$\nabla_{\mathbf{v}} f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{v}$$
where the $\nabla$ on the right denotes the gradient, $\cdot$ is the dot product, and v is a unit vector.[3] This follows from defining a path $h(t) = \mathbf{x} + t\mathbf{v}$ and using the definition of the derivative as a limit, which can be calculated along this path to get:
$$\begin{aligned}
0 &= \lim_{t \to 0} \frac{f(\mathbf{x} + t\mathbf{v}) - f(\mathbf{x}) - t\,Df(\mathbf{x})(\mathbf{v})}{t} \\
  &= \lim_{t \to 0} \frac{f(\mathbf{x} + t\mathbf{v}) - f(\mathbf{x})}{t} - Df(\mathbf{x})(\mathbf{v}) \\
  &= \nabla_{\mathbf{v}} f(\mathbf{x}) - Df(\mathbf{x})(\mathbf{v}).
\end{aligned}$$
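This relation can be checked symbolically for a particular differentiable f. Below is a minimal sympy sketch; the function f = sin(x₁)·x₂² is an arbitrary example, not from any reference above.

```python
import sympy as sp

x1, x2, h = sp.symbols('x1 x2 h')
v1, v2 = sp.symbols('v1 v2')

f = sp.sin(x1) * x2**2          # an arbitrary differentiable example

# Directional derivative from the limit definition
limit_def = sp.limit((f.subs([(x1, x1 + h*v1), (x2, x2 + h*v2)]) - f) / h, h, 0)

# Gradient dotted with v
grad_dot_v = sp.diff(f, x1) * v1 + sp.diff(f, x2) * v2

print(sp.simplify(limit_def - grad_dot_v))   # 0, confirming the identity for this f
```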
Intuitively, the directional derivative of f at a point x represents the rate of change of f in the direction of v.
In a Euclidean space, some authors[4] define the directional derivative to be with respect to an arbitrary nonzero vector v after normalization, thus being independent of its magnitude and depending only on its direction.[5]

This definition gives the rate of increase of f per unit of distance moved in the direction given by v. In this case, one has
$$\nabla_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h|\mathbf{v}|},$$
or in case f is differentiable at x,
$$\nabla_{\mathbf{v}} f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \frac{\mathbf{v}}{|\mathbf{v}|}.$$

In the context of a function on a Euclidean space, some texts restrict the vector v to being a unit vector. With this restriction, both of the above definitions are equivalent.[6]
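A short numpy sketch of the normalized definition (the function, point, and vector below are arbitrary illustrative choices), showing that the result is unchanged when v is rescaled:

```python
import numpy as np

def f(x):
    return np.exp(x[0]) * np.sin(x[1])    # arbitrary smooth example

def unit_rate(f, x, v, h=1e-7):
    # Rate of increase per unit distance: (f(x + h v) - f(x)) / (h |v|)
    return (f(x + h * v) - f(x)) / (h * np.linalg.norm(v))

x = np.array([0.5, 1.0])
v = np.array([3.0, 4.0])
print(unit_rate(f, x, v), unit_rate(f, x, 10 * v))  # same value: depends only on direction
```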
Many of the familiar properties of the ordinary derivative hold for the directional derivative. These include, for any functions f and g defined in a neighborhood of, and differentiable at, p:

1. sum rule: $\nabla_{\mathbf{v}}(f + g) = \nabla_{\mathbf{v}} f + \nabla_{\mathbf{v}} g$
2. constant factor rule: for any constant c, $\nabla_{\mathbf{v}}(cf) = c\,\nabla_{\mathbf{v}} f$
3. product rule (Leibniz's rule): $\nabla_{\mathbf{v}}(fg) = g\,\nabla_{\mathbf{v}} f + f\,\nabla_{\mathbf{v}} g$
4. chain rule: if g is differentiable at p and h is differentiable at g(p), then $\nabla_{\mathbf{v}}(h \circ g)(\mathbf{p}) = h'(g(\mathbf{p}))\,\nabla_{\mathbf{v}} g(\mathbf{p})$
See also: Tangent space § Tangent vectors as directional derivatives
Let M be a differentiable manifold and p a point of M. Suppose that f is a function defined in a neighborhood of p, and differentiable at p. If v is a tangent vector to M at p, then the directional derivative of f along v, denoted variously as df(v) (see Exterior derivative), $\nabla_{\mathbf{v}} f(\mathbf{p})$ (see Covariant derivative), $L_{\mathbf{v}} f(\mathbf{p})$ (see Lie derivative), or $\mathbf{v}_{\mathbf{p}}(f)$ (see Tangent space § Definition via derivations), can be defined as follows. Let γ : [−1, 1] → M be a differentiable curve with γ(0) = p and γ′(0) = v. Then the directional derivative is defined by
$$\nabla_{\mathbf{v}} f(\mathbf{p}) = \left.\frac{d}{d\tau} f \circ \gamma(\tau)\right|_{\tau=0}.$$
This definition can be proven independent of the choice of γ, provided γ is selected in the prescribed manner so that γ(0) = p and γ′(0) = v.
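As a sanity check of the claim that the value does not depend on the choice of γ, here is a small sympy sketch on the simplest manifold M = R²; the point, tangent vector, curves, and function are all arbitrary illustrative choices.

```python
import sympy as sp

tau = sp.symbols('tau')

# Take M = R^2, p = (1, 2), and a tangent vector v = (3, -1).
# Two different curves through p with the same velocity v at tau = 0:
gamma1 = sp.Matrix([1 + 3*tau, 2 - tau])
gamma2 = sp.Matrix([1 + 3*tau + tau**2, 2 - tau + 5*tau**3])

def f(x, y):
    return x**2 * sp.exp(y)        # arbitrary smooth function on M

d1 = sp.diff(f(*gamma1), tau).subs(tau, 0)
d2 = sp.diff(f(*gamma2), tau).subs(tau, 0)
print(d1, d2)   # equal: the directional derivative does not depend on the choice of curve
```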
The Lie derivative of a vector field $W^{\mu}(x)$ along a vector field $V^{\mu}(x)$ is given by the difference of two directional derivatives (with vanishing torsion):
$$\mathcal{L}_V W^{\mu} = (V \cdot \nabla) W^{\mu} - (W \cdot \nabla) V^{\mu}.$$
In particular, for a scalar field $\phi(x)$, the Lie derivative reduces to the standard directional derivative:
$$\mathcal{L}_V \phi = (V \cdot \nabla) \phi.$$
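A minimal sympy sketch of this formula in flat space with Cartesian coordinates, where each directional derivative can be taken with ordinary partial derivatives; the two vector fields below are arbitrary examples.

```python
import sympy as sp

x, y = sp.symbols('x y')

V = sp.Matrix([y, -x])        # arbitrary vector fields for illustration
W = sp.Matrix([x*y, x**2])

def lie_derivative(V, W, coords):
    # (V . grad) W - (W . grad) V, component by component
    n = len(coords)
    VdW = sp.Matrix([sum(V[i] * sp.diff(W[m], coords[i]) for i in range(n)) for m in range(n)])
    WdV = sp.Matrix([sum(W[i] * sp.diff(V[m], coords[i]) for i in range(n)) for m in range(n)])
    return sp.simplify(VdW - WdV)

print(lie_derivative(V, W, [x, y]))
```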
Directional derivatives are often used in introductory derivations of the Riemann curvature tensor. Consider a curved rectangle with an infinitesimal vector $\delta$ along one edge and $\delta'$ along the other. We translate a covector $S$ along $\delta$ then $\delta'$, and then subtract the translation along $\delta'$ and then $\delta$. Instead of building the directional derivative using partial derivatives, we use the covariant derivative. The translation operator for $\delta$ is thus
$$1 + \sum_{\nu} \delta^{\nu} D_{\nu} = 1 + \delta \cdot D,$$
and for $\delta'$,
$$1 + \sum_{\mu} \delta'^{\mu} D_{\mu} = 1 + \delta' \cdot D.$$
The difference between the two paths is then
$$(1 + \delta' \cdot D)(1 + \delta \cdot D) S_{\rho} - (1 + \delta \cdot D)(1 + \delta' \cdot D) S_{\rho} = \sum_{\mu,\nu} \delta'^{\mu} \delta^{\nu} [D_{\mu}, D_{\nu}] S_{\rho}.$$
It can be argued[7] that the noncommutativity of the covariant derivatives measures the curvature of the manifold:
$$[D_{\mu}, D_{\nu}] S_{\rho} = \pm \sum_{\sigma} R^{\sigma}{}_{\rho\mu\nu} S_{\sigma},$$
where $R$ is the Riemann curvature tensor and the sign depends on the sign convention of the author.
In the Poincaré algebra, we can define an infinitesimal translation operator P as
$$\mathbf{P} = i\nabla$$
(the i ensures that P is a self-adjoint operator). For a finite displacement λ, the unitary Hilbert space representation for translations is[8]
$$U(\boldsymbol{\lambda}) = \exp\left(-i\boldsymbol{\lambda} \cdot \mathbf{P}\right).$$
By using the above definition of the infinitesimal translation operator, we see that the finite translation operator is an exponentiated directional derivative:
$$U(\boldsymbol{\lambda}) = \exp\left(\boldsymbol{\lambda} \cdot \nabla\right).$$
This is a translation operator in the sense that it acts on multivariable functions f(x) as
$$U(\boldsymbol{\lambda}) f(\mathbf{x}) = \exp\left(\boldsymbol{\lambda} \cdot \nabla\right) f(\mathbf{x}) = f(\mathbf{x} + \boldsymbol{\lambda}).$$
In standard single-variable calculus, the derivative of a smooth function f(x) is defined by (for small ε)
$$\frac{df}{dx} = \frac{f(x+\varepsilon) - f(x)}{\varepsilon}.$$
This can be rearranged to find f(x+ε):
$$f(x+\varepsilon) = f(x) + \varepsilon\,\frac{df}{dx} = \left(1 + \varepsilon\,\frac{d}{dx}\right) f(x).$$
It follows that $[1 + \varepsilon\,(d/dx)]$ is a translation operator. This is instantly generalized[9] to multivariable functions f(x):
$$f(\mathbf{x} + \boldsymbol{\varepsilon}) = \left(1 + \boldsymbol{\varepsilon} \cdot \nabla\right) f(\mathbf{x}).$$
Here $\boldsymbol{\varepsilon} \cdot \nabla$ is the directional derivative along the infinitesimal displacement ε. We have found the infinitesimal version of the translation operator:
$$U(\boldsymbol{\varepsilon}) = 1 + \boldsymbol{\varepsilon} \cdot \nabla.$$
It is evident that the group multiplication law[10] U(g)U(f) = U(gf) takes the form
$$U(\mathbf{a}) U(\mathbf{b}) = U(\mathbf{a} + \mathbf{b}).$$
So suppose that we take the finite displacement λ and divide it into N parts (N → ∞ is implied everywhere), so that λ/N = ε. In other words,
$$\boldsymbol{\lambda} = N\boldsymbol{\varepsilon}.$$
Then by applying U(ε) N times, we can construct U(λ):
$$[U(\boldsymbol{\varepsilon})]^N = U(N\boldsymbol{\varepsilon}) = U(\boldsymbol{\lambda}).$$
We can now plug in our above expression for U(ε):
$$[U(\boldsymbol{\varepsilon})]^N = \left[1 + \boldsymbol{\varepsilon} \cdot \nabla\right]^N = \left[1 + \frac{\boldsymbol{\lambda} \cdot \nabla}{N}\right]^N.$$
Using the identity[11]
$$\exp(x) = \lim_{N \to \infty}\left[1 + \frac{x}{N}\right]^N,$$
we have
$$U(\boldsymbol{\lambda}) = \exp\left(\boldsymbol{\lambda} \cdot \nabla\right).$$
And since U(ε)f(x) = f(x+ε), we have
$$[U(\boldsymbol{\varepsilon})]^N f(\mathbf{x}) = f(\mathbf{x} + N\boldsymbol{\varepsilon}) = f(\mathbf{x} + \boldsymbol{\lambda}) = U(\boldsymbol{\lambda}) f(\mathbf{x}) = \exp\left(\boldsymbol{\lambda} \cdot \nabla\right) f(\mathbf{x}),$$
Q.E.D.
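The operator identity exp(λ d/dx) f(x) = f(x + λ) can be checked directly. Here is a minimal sympy sketch in one variable; the cubic polynomial is an arbitrary choice, picked because the operator power series then terminates and no truncation error enters.

```python
import sympy as sp

x, lam = sp.symbols('x lam')
f = x**3 - 2*x + 1                      # a polynomial, so the operator series terminates

# Apply exp(lam * d/dx) as a power series: sum_n (lam^n / n!) d^n f / dx^n
shifted = sum(lam**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(5))

print(sp.expand(shifted - f.subs(x, x + lam)))   # 0: exp(lam d/dx) f(x) = f(x + lam)
```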
As a technical note, this procedure is only possible because the translation group forms an Abelian subgroup (Cartan subalgebra) in the Poincaré algebra. In particular, the group multiplication law U(a)U(b) = U(a+b) should not be taken for granted. We also note that the Poincaré group is a connected Lie group. It is a group of transformations T(ξ) that are described by a continuous set of real parameters $\xi^a$. The group multiplication law takes the form
$$T(\bar{\xi}) T(\xi) = T(f(\bar{\xi}, \xi)).$$
Taking $\xi^a = 0$ as the coordinates of the identity, we must have
$$f^a(\xi, 0) = f^a(0, \xi) = \xi^a.$$
The actual operators on the Hilbert space are represented by unitary operators U(T(ξ)). In the above notation we suppressed the T; we now write U(λ) as U(P(λ)). For a small neighborhood around the identity, the power series representation
$$U(T(\xi)) = 1 + i\sum_a \xi^a t_a + \frac{1}{2} \sum_{b,c} \xi^b \xi^c t_{bc} + \cdots$$
is quite good. Suppose that the U(T(ξ)) form a non-projective representation, i.e.,
$$U(T(\bar{\xi})) U(T(\xi)) = U(T(f(\bar{\xi}, \xi))).$$
The expansion of f to second order is
$$f^a(\bar{\xi}, \xi) = \xi^a + \bar{\xi}^a + \sum_{b,c} f^{abc} \bar{\xi}^b \xi^c.$$
After expanding the representation multiplication equation and equating coefficients, we have the nontrivial condition
$$t_{bc} = -t_b t_c - i\sum_a f^{abc} t_a.$$
Since $t_{ab}$ is by definition symmetric in its indices, we have the standard Lie algebra commutator:
$$[t_b, t_c] = i\sum_a (-f^{abc} + f^{acb}) t_a = i\sum_a C^{abc} t_a,$$
with C the structure constant. The generators for translations are partial derivative operators, which commute:
$$\left[\frac{\partial}{\partial x^b}, \frac{\partial}{\partial x^c}\right] = 0.$$
This implies that the structure constants vanish, and thus the quadratic coefficients in the f expansion vanish as well. This means that f is simply additive:
$$f^a_{\text{abelian}}(\bar{\xi}, \xi) = \xi^a + \bar{\xi}^a,$$
and thus for abelian groups,
$$U(T(\bar{\xi})) U(T(\xi)) = U(T(\bar{\xi} + \xi)).$$
Q.E.D.
The rotation operator also contains a directional derivative. The rotation operator for an angle θ, i.e., a rotation by an amount θ = |θ| about an axis parallel to $\hat{\theta} = \boldsymbol{\theta}/\theta$, is
$$U(R(\boldsymbol{\theta})) = \exp(-i\boldsymbol{\theta} \cdot \mathbf{L}).$$
Here L is the vector operator that generates SO(3):
$$\mathbf{L} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \mathbf{i} + \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \mathbf{j} + \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \mathbf{k}.$$
It may be shown geometrically that an infinitesimal right-handed rotation changes the position vector x by
$$\mathbf{x} \rightarrow \mathbf{x} - \delta\boldsymbol{\theta} \times \mathbf{x}.$$
So we would expect under infinitesimal rotation:
$$U(R(\delta\boldsymbol{\theta})) f(\mathbf{x}) = f(\mathbf{x} - \delta\boldsymbol{\theta} \times \mathbf{x}) = f(\mathbf{x}) - (\delta\boldsymbol{\theta} \times \mathbf{x}) \cdot \nabla f.$$
It follows that
$$U(R(\delta\boldsymbol{\theta})) = 1 - (\delta\boldsymbol{\theta} \times \mathbf{x}) \cdot \nabla.$$
Following the same exponentiation procedure as above, we arrive at the rotation operator in the position basis, which is an exponentiated directional derivative:[12]
$$U(R(\boldsymbol{\theta})) = \exp(-(\boldsymbol{\theta} \times \mathbf{x}) \cdot \nabla).$$
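A small numerical sketch of the infinitesimal-rotation step f(x − δθ × x) ≈ f(x) − (δθ × x)·∇f, assuming numpy; the test function, point, and rotation axis are arbitrary choices.

```python
import numpy as np

def f(x):
    return x[0]**2 + 3.0 * x[1] * x[2]          # arbitrary smooth test function

def grad_f(x):
    return np.array([2.0 * x[0], 3.0 * x[2], 3.0 * x[1]])

x = np.array([1.0, 2.0, -1.0])
dtheta = 1e-5 * np.array([0.0, 0.0, 1.0])       # small rotation about the z-axis

lhs = f(x - np.cross(dtheta, x))                # rotated argument
rhs = f(x) - np.dot(np.cross(dtheta, x), grad_f(x))
print(lhs, rhs)                                 # agree to O(|dtheta|^2)
```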
A normal derivative is a directional derivative taken in the direction normal (that is, orthogonal) to some surface in space, or more generally along a normal vector field orthogonal to some hypersurface. See, for example, Neumann boundary condition. If the normal direction is denoted by $\mathbf{n}$, then the normal derivative of a function f is sometimes denoted as $\frac{\partial f}{\partial \mathbf{n}}$. In other notations,
$$\frac{\partial f}{\partial \mathbf{n}} = \nabla f(\mathbf{x}) \cdot \mathbf{n} = \nabla_{\mathbf{n}} f(\mathbf{x}) = \frac{\partial f}{\partial \mathbf{x}} \cdot \mathbf{n} = Df(\mathbf{x})[\mathbf{n}].$$
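For instance, on the unit sphere the outward unit normal at a point is n = x/|x|, and the normal derivative is just ∇f·n. A minimal numpy sketch with an arbitrary scalar field (the field and the surface point are illustrative choices):

```python
import numpy as np

def f(x):
    return x[0] * x[1] + x[2]**2                # arbitrary scalar field

def grad_f(x):
    return np.array([x[1], x[0], 2.0 * x[2]])

# Point on the unit sphere; the outward unit normal there is n = x / |x|.
x = np.array([0.6, 0.8, 0.0])
n = x / np.linalg.norm(x)

normal_derivative = np.dot(grad_f(x), n)        # df/dn = grad f . n
print(normal_derivative)
```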
Several important results in continuum mechanics require the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors.[13] The directional derivative provides a systematic way of finding these derivatives.
This section is an excerpt from Tensor derivative (continuum mechanics) § Derivatives with respect to vectors and second-order tensors.
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
Let f(v) be a real-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the vector defined through its dot product with any vector u being
$$\frac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = Df(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}\, f(\mathbf{v} + \alpha\,\mathbf{u})\right]_{\alpha=0}$$
for all vectors u. The above dot product yields a scalar, and if u is a unit vector it gives the directional derivative of f at v, in the direction u.
Properties:

1. If $f(\mathbf{v}) = f_1(\mathbf{v}) + f_2(\mathbf{v})$, then $\dfrac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = \left(\dfrac{\partial f_1}{\partial \mathbf{v}} + \dfrac{\partial f_2}{\partial \mathbf{v}}\right) \cdot \mathbf{u}$
2. If $f(\mathbf{v}) = f_1(\mathbf{v})\, f_2(\mathbf{v})$, then $\dfrac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = \left(\dfrac{\partial f_1}{\partial \mathbf{v}} \cdot \mathbf{u}\right) f_2(\mathbf{v}) + f_1(\mathbf{v}) \left(\dfrac{\partial f_2}{\partial \mathbf{v}} \cdot \mathbf{u}\right)$
3. If $f(\mathbf{v}) = f_1(f_2(\mathbf{v}))$, then $\dfrac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = \dfrac{\partial f_1}{\partial f_2}\, \dfrac{\partial f_2}{\partial \mathbf{v}} \cdot \mathbf{u}$
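A minimal sympy sketch of this definition; the choice f(v) = v·v is an arbitrary example, for which ∂f/∂v = 2v, so the defining dot product should come out to 2v·u.

```python
import sympy as sp

v1, v2, v3, u1, u2, u3, alpha = sp.symbols('v1 v2 v3 u1 u2 u3 alpha')
v = sp.Matrix([v1, v2, v3])
u = sp.Matrix([u1, u2, u3])

f = lambda w: w.dot(w)                  # example: f(v) = v . v, so df/dv should be 2v

Df_u = sp.diff(f(v + alpha * u), alpha).subs(alpha, 0)   # [d/dα f(v + αu)]_{α=0}
print(sp.expand(Df_u - 2 * v.dot(u)))                    # 0: (df/dv) . u = 2 v . u
```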
Let f(v) be a vector-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the second-order tensor defined through its dot product with any vector u being
$$\frac{\partial \mathbf{f}}{\partial \mathbf{v}} \cdot \mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}\, \mathbf{f}(\mathbf{v} + \alpha\,\mathbf{u})\right]_{\alpha=0}$$
for all vectors u. The above dot product yields a vector, and if u is a unit vector it gives the directional derivative of f at v, in the direction u.
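A minimal sympy sketch; f(v) = (v·v)v is an arbitrary example, and here the second-order tensor ∂f/∂v is simply the Jacobian matrix of f, whose action on u should reproduce the α-derivative.

```python
import sympy as sp

v1, v2, v3, u1, u2, u3, alpha = sp.symbols('v1 v2 v3 u1 u2 u3 alpha')
v = sp.Matrix([v1, v2, v3])
u = sp.Matrix([u1, u2, u3])

f = lambda w: w.dot(w) * w                   # example: f(v) = (v . v) v

# Directional derivative [d/dα f(v + αu)]_{α=0}
Df_u = sp.diff(f(v + alpha * u), alpha).subs(alpha, 0)

# The second-order tensor df/dv is the Jacobian matrix; its action on u should match
J = f(v).jacobian(v)
print(sp.simplify(Df_u - J * u))             # zero vector
```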
Let $f({\boldsymbol{S}})$ be a real-valued function of the second-order tensor ${\boldsymbol{S}}$. Then the derivative of $f({\boldsymbol{S}})$ with respect to ${\boldsymbol{S}}$ (or at ${\boldsymbol{S}}$) in the direction ${\boldsymbol{T}}$ is the second-order tensor defined as
$$\frac{\partial f}{\partial {\boldsymbol{S}}} : {\boldsymbol{T}} = Df({\boldsymbol{S}})[{\boldsymbol{T}}] = \left[\frac{d}{d\alpha}\, f({\boldsymbol{S}} + \alpha\,{\boldsymbol{T}})\right]_{\alpha=0}$$
for all second-order tensors ${\boldsymbol{T}}$.
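A small numerical sketch, assuming numpy; f(S) = tr(SᵀS) is an arbitrary example, for which ∂f/∂S = 2S, so the double contraction ∂f/∂S : T should equal 2S : T.

```python
import numpy as np

def f(S):
    return np.trace(S.T @ S)                 # example: f(S) = tr(S^T S), so df/dS = 2S

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

eps = 1e-6
directional = (f(S + eps * T) - f(S)) / eps  # [d/dα f(S + αT)]_{α=0}, finite-difference
double_dot = np.tensordot(2 * S, T)          # (df/dS) : T = 2 S : T
print(directional, double_dot)               # agree to within O(eps)
```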
Let ${\boldsymbol{F}}({\boldsymbol{S}})$ be a second-order tensor valued function of the second-order tensor ${\boldsymbol{S}}$. Then the derivative of ${\boldsymbol{F}}({\boldsymbol{S}})$ with respect to ${\boldsymbol{S}}$ (or at ${\boldsymbol{S}}$) in the direction ${\boldsymbol{T}}$ is the fourth-order tensor defined as
$$\frac{\partial {\boldsymbol{F}}}{\partial {\boldsymbol{S}}} : {\boldsymbol{T}} = D{\boldsymbol{F}}({\boldsymbol{S}})[{\boldsymbol{T}}] = \left[\frac{d}{d\alpha}\, {\boldsymbol{F}}({\boldsymbol{S}} + \alpha\,{\boldsymbol{T}})\right]_{\alpha=0}$$
for all second-order tensors ${\boldsymbol{T}}$.
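A similar numerical sketch, assuming numpy; F(S) = S·S is an arbitrary example, for which the fourth-order tensor acts as ∂F/∂S : T = ST + TS.

```python
import numpy as np

def F(S):
    return S @ S                             # example: F(S) = S . S

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

eps = 1e-6
directional = (F(S + eps * T) - F(S)) / eps  # [d/dα F(S + αT)]_{α=0}
exact = S @ T + T @ S                        # (dF/dS) : T for this F
print(np.max(np.abs(directional - exact)))   # small: matches up to O(eps)
```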
Media related to Directional derivative at Wikimedia Commons
[1] Wrede, R.; Spiegel, M. R. (2010). Advanced Calculus (3rd ed.). Schaum's Outline Series. ISBN 978-0-07-162366-7.
[2] The applicability extends to functions over spaces without a metric and to differentiable manifolds, such as in general relativity.
[3] If the dot product is undefined, the gradient is also undefined; however, for differentiable f, the directional derivative is still defined, and a similar relation exists with the exterior derivative.
[4] Thomas, George B. Jr.; Finney, Ross L. (1979). Calculus and Analytic Geometry (5th ed.). Addison-Wesley. p. 593.
[5] This typically assumes a Euclidean space – for example, a function of several variables typically has no definition of the magnitude of a vector, and hence of a unit vector.
[6] Hughes Hallett, Deborah; McCallum, William G.; Gleason, Andrew M. (2012). Calculus: Single and Multivariable. John Wiley. p. 780. ISBN 9780470888612. OCLC 828768012.
[7] Zee, A. (2013). Einstein Gravity in a Nutshell. Princeton: Princeton University Press. p. 341. ISBN 9780691145587.
[8] Weinberg, Steven (1999). The Quantum Theory of Fields (reprinted with corrections ed.). Cambridge: Cambridge University Press. ISBN 9780521550017.
[9] Zee, A. (2013). Einstein Gravity in a Nutshell. Princeton: Princeton University Press. ISBN 9780691145587.
[10] Cahill, Kevin (2013). Physical Mathematics (repr. ed.). Cambridge: Cambridge University Press. ISBN 978-1107005211.
[11] Larson, Ron; Edwards, Bruce H. (2010). Calculus of a Single Variable (9th ed.). Belmont: Brooks/Cole. ISBN 9780547209982.
[12] Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). New York: Kluwer Academic/Plenum. p. 318. ISBN 9780306447907.
[13] Marsden, J. E.; Hughes, T. J. R. (2000). Mathematical Foundations of Elasticity. Dover.