In directional statistics, the projected normal distribution (also known as the offset normal distribution, angular normal distribution or angular Gaussian distribution) is a probability distribution over directions that describes the radial projection onto the unit (n-1)-sphere of a random variable with an n-variate normal distribution.
Definition and properties
Given a random variable $\boldsymbol{X} \in \mathbb{R}^n$ that follows a multivariate normal distribution $\mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, the projected normal distribution $\mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is the distribution of the random variable $\boldsymbol{Y} = \frac{\boldsymbol{X}}{\lVert \boldsymbol{X} \rVert}$ obtained by projecting $\boldsymbol{X}$ onto the unit sphere. In the general case, the projected normal distribution can be asymmetric and multimodal. If $\boldsymbol{\mu}$ is parallel to an eigenvector of $\boldsymbol{\Sigma}$, the distribution is symmetric.[3] The first version of this distribution was introduced in Pukkila and Rao (1988).[4]
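Sampling from this distribution only requires a multivariate normal sampler followed by normalization. Below is a minimal sketch assuming NumPy; the helper name `sample_projected_normal` is illustrative, not from the cited sources.

```python
import numpy as np

def sample_projected_normal(mu, Sigma, size, rng):
    """Draw from PN_n(mu, Sigma) by projecting N_n(mu, Sigma) draws onto the unit sphere."""
    x = rng.multivariate_normal(mu, Sigma, size=size)      # (size, n) normal draws
    return x / np.linalg.norm(x, axis=-1, keepdims=True)   # Y = X / ||X||

rng = np.random.default_rng(0)
mu = np.array([1.0, 0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
samples = sample_projected_normal(mu, Sigma, size=5, rng=rng)  # five points on the unit circle
```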
Support
The support of this distribution is the unit (n-1)-sphere, which can be given either in terms of a set of $(n-1)$-dimensional angular spherical coordinates:

$$\boldsymbol{\Theta} = [0,\pi]^{n-2} \times [0,2\pi) \subset \mathbb{R}^{n-1}$$

or in terms of $n$-dimensional Cartesian coordinates:

$$\mathbb{S}^{n-1} = \{\boldsymbol{z} \in \mathbb{R}^n : \lVert \boldsymbol{z} \rVert = 1\} \subset \mathbb{R}^n$$

The two are linked via the embedding function $e : \boldsymbol{\Theta} \to \mathbb{R}^n$, with range $e(\boldsymbol{\Theta}) = \mathbb{S}^{n-1}$. This function is defined by the formula for spherical coordinates at $r = 1$.
Density function
The density of the projected normal distribution $\mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ can be constructed from the density of its generating n-variate normal distribution $\mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ by re-parametrising to n-dimensional spherical coordinates and then integrating over the radial coordinate.
In full spherical coordinates with radial component $r \in [0,\infty)$ and angles $\boldsymbol{\theta} = (\theta_1, \dots, \theta_{n-1}) \in \boldsymbol{\Theta}$, a point $\boldsymbol{x} = (x_1, \dots, x_n) \in \mathbb{R}^n$ can be written as $\boldsymbol{x} = r\boldsymbol{v}$, with $\boldsymbol{v} \in \mathbb{S}^{n-1}$. To be clear, $\boldsymbol{v} = e(\boldsymbol{\theta})$, as given by the above-defined embedding function. The joint density becomes

$$p(r, \boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = r^{n-1}\,\mathcal{N}_n(r\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{r^{n-1}}{\sqrt{|\boldsymbol{\Sigma}|}\,(2\pi)^{\frac{n}{2}}}\, e^{-\frac{1}{2}(r\boldsymbol{v} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1}(r\boldsymbol{v} - \boldsymbol{\mu})}$$

where the factor $r^{n-1}$ is due to the change of variables $\boldsymbol{x} = r\boldsymbol{v}$. The density of $\mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ can then be obtained via marginalisation over $r$ as[5]

$$p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \int_0^\infty p(r, \boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})\, dr.$$

The same density had previously been obtained in Pukkila and Rao (1988, Eq. (2.4))[6] using a different notation.
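For $n = 2$ this marginalisation is easy to carry out numerically. The sketch below (assuming NumPy/SciPy; the helper name is illustrative) integrates $r\,\mathcal{N}_2(r\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$ over $r$ and checks that the resulting angular density integrates to one over $[0, 2\pi)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import multivariate_normal

mu = np.array([1.0, 0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

def pn2_density_numeric(theta):
    """p(theta | mu, Sigma) for n = 2, via numerical marginalisation over the radial coordinate."""
    v = np.array([np.cos(theta), np.sin(theta)])
    integrand = lambda r: r * multivariate_normal.pdf(r * v, mean=mu, cov=Sigma)
    return quad(integrand, 0.0, np.inf)[0]

total = quad(pn2_density_numeric, 0.0, 2 * np.pi)[0]
print(total)  # ~1.0
```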
Note on density definition
This subsection clarifies how the various forms of probability densities used in this article are defined, lest they be misunderstood. Take for example a random variate $u \in (0,1]$ with uniform density, $p_U(u) = 1$. If $\ell = -\log u$, it has density $p_L(\ell) = e^{-\ell}$. Both densities are defined w.r.t. Lebesgue measure on the real line. The tacit, default assumptions usually made when specifying density functions are: the density is w.r.t. Lebesgue measure applied in the space where the argument of the density function lives; and therefore the densities involved in a change of variables are related by a factor dependent on the derivative(s) of the transformation ($|du/d\ell| = e^{-\ell}$ in this example, and $r^{n-1}$ for the above change of variables, $\boldsymbol{x} = r\boldsymbol{v}$). Neither of these assumptions applies to the $\mathcal{PN}_n$ densities in this article:
- For $n \ge 3$ the density $p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$ is not defined w.r.t. Lebesgue measure in $\mathbb{R}^{n-1}$, where $\boldsymbol{\theta}$ lives, because that measure does not agree with the standard notion of hyperspherical area. Instead, the density is defined w.r.t. a measure that is pulled back, via the embedding function, from Lebesgue measure in the $(n-1)$-dimensional tangent space of the hypersphere. This is explained below.
- With the embedding $\boldsymbol{v} = e(\boldsymbol{\theta})$, a density $\tilde{p}(\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$ cannot be defined w.r.t. Lebesgue measure, because $\mathbb{S}^{n-1} \subset \mathbb{R}^n$ has Lebesgue measure zero. Instead, $\tilde{p}$ is defined w.r.t. scaled Hausdorff measure.
The pullback and Hausdorff measures agree, so that:
$$p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \tilde{p}(\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$$

where there is no change-of-variables factor, because the densities use different measures.
To better understand what is meant by a density being defined w.r.t. a measure (a function that maps subsets of sample space to a non-negative real-valued 'volume'), consider a measurable subset $U \subseteq \boldsymbol{\Theta}$ with embedded image $V = e(U) \subseteq \mathbb{S}^{n-1}$, and let $\boldsymbol{v} = e(\boldsymbol{\theta}) \sim \mathcal{PN}_n$; then the probability of finding the sample in the subset is:

$$P(\boldsymbol{\theta} \in U) = \int_U p \, d\pi = P(\boldsymbol{v} \in V) = \int_V \tilde{p} \, dh$$

where $\pi, h$ are respectively the pullback and Hausdorff measures, and the integrals are Lebesgue integrals, which can be rewritten as Riemann integrals thus:

$$\int_U p \, d\pi = \int_0^\infty \pi\left(\{\boldsymbol{\theta} \in U : p(\boldsymbol{\theta}) > t\}\right) dt$$

Pullback measure
The tangent space at $\boldsymbol{v} \in \mathbb{S}^{n-1}$ is the $(n-1)$-dimensional linear subspace perpendicular to $\boldsymbol{v}$, where Lebesgue measure can be used. At very small scales, the tangent space is indistinguishable from the sphere (e.g. the Earth looks locally flat), so that Lebesgue measure in the tangent space agrees with area on the hypersphere. The tangent-space Lebesgue measure is pulled back via the embedding function, as follows, to define the measure in coordinate space. For $U \subseteq \boldsymbol{\Theta}$, a measurable subset of coordinate space, the pullback measure, written as a Riemann integral, is:
$$\pi(U) = \int_U \sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}})\right|}\; d\theta_1 \cdots d\theta_{n-1}$$

where $\mathbf{E}_{\boldsymbol{\theta}}$ is the Jacobian of the embedding function $e(\boldsymbol{\theta})$, the columns of which are vectors in the tangent space where the Lebesgue measure is applied. It can be shown that

$$\sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}})\right|} = \prod_{i=1}^{n-2}\sin^{n-1-i}(\theta_i)$$

(a numerical check of this identity is sketched after the list below). For this measure, we can rewrite the above Lebesgue integral, again as a Riemann integral:[7]
$$P(\boldsymbol{\theta} \in U) = \int_U p \, d\pi = \int_U p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})\, \sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}})\right|}\; d\theta_1 \cdots d\theta_{n-1}$$

Finally, for a better geometric understanding of the square-root factor, consider:
- For $n = 2$, when integrating over the unit circle w.r.t. $\theta_1$, with embedding $e(\theta_1) = (\cos\theta_1, \sin\theta_1)$, the Jacobian is $\mathbf{E}_{\boldsymbol{\theta}} = [-\sin\theta_1 \;\; \cos\theta_1]'$, so that $\sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}})\right|} = 1$. The angular differential $d\theta_1$ directly gives the associated arc length on the circle.
- For $n = 3$, when integrating over the unit sphere w.r.t. $\theta_1, \theta_2$, we get $\sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}})\right|} = \sin\theta_1$, which is the radius of the circle of latitude at $\theta_1$ (compare the equator to a polar circle). The differential surface area on the sphere is $\sin\theta_1 \, d\theta_1 \, d\theta_2$.
- More generally, for $n \ge 2$, let $\mathbf{T}$ be a square or tall matrix, the column vectors of which represent the edges (meeting at a common vertex) of a parallelotope, which we denote $/\mathbf{T}/$. For a square matrix the parallelotope volume is given by the absolute value of its determinant, $\left|\det(\mathbf{T})\right|$, while for a tall matrix the volume is the square root of the Gram determinant, $\sqrt{\left|\det(\mathbf{T}'\mathbf{T})\right|}$. Let $\mathbf{R} = \operatorname{diag}(d\theta_1, \cdots, d\theta_{n-1})$, so that $/\mathbf{R}/ \subset \boldsymbol{\Theta}$ is a rectangle with infinitesimally small volume $\left|\det(\mathbf{R})\right| = \prod_{i=1}^{n-1} d\theta_i$. Since the smooth embedding function is linear at small scales, the embedded image is $e(/\mathbf{R}/) = /\mathbf{E}_{\boldsymbol{\theta}}\mathbf{R}/$, with volume $\sqrt{\left|\det(\mathbf{R}\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}}\mathbf{R})\right|} = \sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}'\mathbf{E}_{\boldsymbol{\theta}})\right|}\; d\theta_1 \cdots d\theta_{n-1}$.
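The product-of-sines expression for the Gram-determinant factor can be checked numerically. The sketch below (assuming NumPy) uses a finite-difference Jacobian of a standard hyperspherical embedding consistent with the product formula above; all names are illustrative.

```python
import numpy as np

def embed(theta):
    """Hyperspherical embedding e: Theta -> S^{n-1} at r = 1; theta has length n - 1."""
    x = np.ones(len(theta) + 1)
    for i, t in enumerate(theta):   # x_i picks up cos(theta_i); all later entries pick up sin(theta_i)
        x[i] *= np.cos(t)
        x[i + 1:] *= np.sin(t)
    return x

def gram_factor(theta, eps=1e-6):
    """sqrt(|det(E' E)|), with E the Jacobian of the embedding (finite differences)."""
    E = np.stack([(embed(theta + eps * d) - embed(theta - eps * d)) / (2 * eps)
                  for d in np.eye(len(theta))], axis=1)
    return np.sqrt(abs(np.linalg.det(E.T @ E)))

theta = np.array([0.7, 1.1, 2.0])                 # n = 4
print(gram_factor(theta))                         # numerical Gram factor
print(np.sin(theta[0])**2 * np.sin(theta[1]))     # closed form: prod_i sin^{n-1-i}(theta_i)
```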
Circular distribution
For $n = 2$, parametrising the position on the unit circle in polar coordinates as $\boldsymbol{v} = (\cos\theta, \sin\theta)$, the density function can be written with respect to the parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ of the initial normal distribution as

$$p(\theta \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{e^{-\frac{1}{2}\boldsymbol{\mu}^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}}}{2\pi\sqrt{|\boldsymbol{\Sigma}|}\;\boldsymbol{v}^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{v}}\left(1 + T(\theta)\,\frac{\Phi(T(\theta))}{\phi(T(\theta))}\right) I_{[0,2\pi)}(\theta)$$

where $\phi$ and $\Phi$ are the density and cumulative distribution function of a standard normal distribution, $T(\theta) = \frac{\boldsymbol{v}^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}}{\sqrt{\boldsymbol{v}^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{v}}}$, and $I$ is the indicator function.[8]
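As a quick sanity check of this closed form, it can be evaluated on the circle and integrated over $[0, 2\pi)$, which should give one; a sketch assuming NumPy/SciPy (the helper name is illustrative):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def pn2_density(theta, mu, Sigma):
    """Closed-form PN_2 density at polar angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    P = np.linalg.inv(Sigma)
    A = v @ P @ v
    T = (v @ P @ mu) / np.sqrt(A)
    const = np.exp(-0.5 * mu @ P @ mu) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)) * A)
    return const * (1.0 + T * norm.cdf(T) / norm.pdf(T))

mu = np.array([1.0, 0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
total = quad(lambda t: pn2_density(t, mu, Sigma), 0.0, 2 * np.pi)[0]
print(total)  # ~1.0; pointwise it also matches the numerical marginalisation above
```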
In the circular case, if the mean vector $\boldsymbol{\mu}$ is parallel to the eigenvector associated with the largest eigenvalue of the covariance, the distribution is symmetric and has a mode at $\theta = \alpha$ and either a mode or an antimode at $\theta = \alpha + \pi$, where $\alpha$ is the polar angle of $\boldsymbol{\mu} = (r\cos\alpha, r\sin\alpha)$. If the mean is instead parallel to the eigenvector associated with the smallest eigenvalue, the distribution is also symmetric, but has either a mode or an antimode at $\theta = \alpha$ and an antimode at $\theta = \alpha + \pi$.[9]
Spherical distribution
For $n = 3$, parametrising the position on the unit sphere in spherical coordinates as $\boldsymbol{v} = (\cos\theta_1\sin\theta_2,\; \sin\theta_1\sin\theta_2,\; \cos\theta_2)$, where $\boldsymbol{\theta} = (\theta_1, \theta_2)$ are the azimuth $\theta_1 \in [0,2\pi)$ and inclination $\theta_2 \in [0,\pi]$ angles respectively, the density function becomes

$$p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{e^{-\frac{1}{2}\boldsymbol{\mu}^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}}}{\sqrt{|\boldsymbol{\Sigma}|}\left(2\pi\,\boldsymbol{v}^\top\boldsymbol{\Sigma}^{-1}\boldsymbol{v}\right)^{\frac{3}{2}}}\left(\frac{\Phi(T(\boldsymbol{\theta}))}{\phi(T(\boldsymbol{\theta}))} + T(\boldsymbol{\theta})\left(1 + T(\boldsymbol{\theta})\,\frac{\Phi(T(\boldsymbol{\theta}))}{\phi(T(\boldsymbol{\theta}))}\right)\right) I_{[0,2\pi)}(\theta_1)\, I_{[0,\pi]}(\theta_2)$$

where $\phi$, $\Phi$, $T$, and $I$ have the same meaning as in the circular case.[10]
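A similar numerical check works on the sphere: since this density is defined w.r.t. spherical surface area, integrating it against the area element $\sin\theta_2\, d\theta_1\, d\theta_2$ over the full sphere should give one. A sketch assuming NumPy/SciPy (names are illustrative):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import dblquad

def pn3_density(theta1, theta2, mu, Sigma):
    """Closed-form PN_3 density in azimuth/inclination coordinates."""
    v = np.array([np.cos(theta1) * np.sin(theta2), np.sin(theta1) * np.sin(theta2), np.cos(theta2)])
    P = np.linalg.inv(Sigma)
    A = v @ P @ v
    T = (v @ P @ mu) / np.sqrt(A)
    M = norm.cdf(T) / norm.pdf(T)                       # the ratio Phi(T)/phi(T)
    const = np.exp(-0.5 * mu @ P @ mu) / (np.sqrt(np.linalg.det(Sigma)) * (2 * np.pi * A) ** 1.5)
    return const * (M + T * (1 + T * M))

mu = np.array([0.5, -0.2, 1.0])
Sigma = np.diag([1.0, 2.0, 0.5])
total = dblquad(lambda t2, t1: pn3_density(t1, t2, mu, Sigma) * np.sin(t2), 0, 2 * np.pi, 0, np.pi)[0]
print(total)  # ~1.0
```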
Angular central Gaussian distribution
In the special case $\boldsymbol{\mu} = \mathbf{0}$, the projected normal distribution with $n \ge 2$ is known as the angular central Gaussian (ACG) distribution,[11] and in this case the density function can be obtained in closed form as a function of Cartesian coordinates. Let $\mathbf{x} \sim \mathcal{N}_n(\mathbf{0}, \boldsymbol{\Sigma})$ and project radially: $\mathbf{v} = \lVert\mathbf{x}\rVert^{-1}\mathbf{x}$, so that $\mathbf{v} \in \mathbb{S}^{n-1} = \{\mathbf{z} \in \mathbb{R}^n : \lVert\mathbf{z}\rVert = 1\}$ (the unit hypersphere). We write $\mathbf{v} \sim \operatorname{ACG}(\boldsymbol{\Sigma})$, which, as explained above, at $\mathbf{v} = e(\boldsymbol{\theta})$ has density:

$$\tilde{p}_{\text{ACG}}(\mathbf{v} \mid \boldsymbol{\Sigma}) = p(\boldsymbol{\theta} \mid \mathbf{0}, \boldsymbol{\Sigma}) = \int_0^\infty r^{n-1}\,\mathcal{N}_n(r\mathbf{v} \mid \mathbf{0}, \boldsymbol{\Sigma})\, dr = \frac{\Gamma(\tfrac{n}{2})}{2\pi^{\frac{n}{2}}}\left|\boldsymbol{\Sigma}\right|^{-\frac{1}{2}}(\mathbf{v}'\boldsymbol{\Sigma}^{-1}\mathbf{v})^{-\frac{n}{2}}$$

where the integral can be solved by a change of variables and then using the standard definition of the gamma function. Notice that:
- For any $k > 0$ there is the parameter indeterminacy $\operatorname{ACG}(k\boldsymbol{\Sigma}) = \operatorname{ACG}(\boldsymbol{\Sigma})$, because the factor $k^{-\frac{n}{2}}$ from $|k\boldsymbol{\Sigma}|^{-\frac{1}{2}}$ cancels against the factor $k^{\frac{n}{2}}$ from $(\mathbf{v}'(k\boldsymbol{\Sigma})^{-1}\mathbf{v})^{-\frac{n}{2}}$.
- If $\boldsymbol{\Sigma} = k\mathbf{I}_n$, the uniform hypersphere distribution, $\operatorname{ACG}(\mathbf{I}_n)$, results, with constant density equal to the reciprocal of the surface area of $\mathbb{S}^{n-1}$: $\tilde{p}_{\text{ACG}}(\mathbf{v} \mid k\mathbf{I}_n) = \frac{\Gamma(\frac{n}{2})}{2\pi^{\frac{n}{2}}}$ (see the numerical check after this list).
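Both bullet points, and the closed form itself, are easy to verify numerically. The sketch below (assuming NumPy/SciPy; `acg_density` is an illustrative helper, not a library function) compares the closed form against the marginalisation integral and then checks the scale indeterminacy and the uniform special case for $n = 3$.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import multivariate_normal
from scipy.integrate import quad

def acg_density(v, Sigma):
    """Closed-form ACG density on S^{n-1}, w.r.t. hyperspherical surface area."""
    n = len(v)
    A = v @ np.linalg.inv(Sigma) @ v
    return gamma(n / 2) / (2 * np.pi ** (n / 2)) * np.linalg.det(Sigma) ** -0.5 * A ** (-n / 2)

rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 0.7]])
v = rng.normal(size=3)
v /= np.linalg.norm(v)                                  # an arbitrary direction on S^2

# closed form vs. numerical marginalisation over r (the two values agree)
num = quad(lambda r: r**2 * multivariate_normal.pdf(r * v, mean=np.zeros(3), cov=Sigma), 0, np.inf)[0]
print(num, acg_density(v, Sigma))

# parameter indeterminacy: ACG(k * Sigma) has the same density as ACG(Sigma)
print(acg_density(v, 7.5 * Sigma))

# Sigma = k * I_n gives the uniform density, 1 / (surface area of S^2) = 1 / (4 * pi)
print(acg_density(v, 3.0 * np.eye(3)), 1 / (4 * np.pi))
```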
ACG via transformation of normal or uniform variates
Let $\mathbf{T}$ be any $n$-by-$n$ invertible matrix such that $\mathbf{T}\mathbf{T}' = \boldsymbol{\Sigma}$. Let $\mathbf{u} \sim \operatorname{ACG}(\mathbf{I}_n)$ (uniform) and $s \sim \chi(n)$ (chi distribution), so that $\mathbf{x} = s\mathbf{T}\mathbf{u} \sim \mathcal{N}_n(\mathbf{0}, \boldsymbol{\Sigma})$ (multivariate normal). Now consider:
$$\mathbf{v} = \frac{\mathbf{T}\mathbf{u}}{\lVert\mathbf{T}\mathbf{u}\rVert} = \frac{\mathbf{x}}{\lVert\mathbf{x}\rVert} \sim \operatorname{ACG}(\boldsymbol{\Sigma})$$

which shows that the ACG distribution also results from applying, to uniform variates, the normalized linear transform:[12]

$$f_{\mathbf{T}}(\mathbf{u}) = \frac{\mathbf{T}\mathbf{u}}{\lVert\mathbf{T}\mathbf{u}\rVert}$$

Some further explanation of these two ways to obtain $\mathbf{v} \sim \operatorname{ACG}(\boldsymbol{\Sigma})$ may be helpful (a numerical comparison of the two constructions follows the list below):
- If we start with $\mathbf{x} \in \mathbb{R}^n$, sampled from a multivariate normal, we can project radially onto $\mathbb{S}^{n-1}$ to obtain ACG variates. To derive the ACG density, we first do a change of variables, $\mathbf{x} \mapsto (r, \mathbf{v})$, which is still an $n$-dimensional representation; this transformation induces the differential volume change factor $r^{n-1}$, which is proportional to volume in the $(n-1)$-dimensional tangent space perpendicular to $\mathbf{x}$. Then, to finally obtain the ACG density on the $(n-1)$-dimensional unit sphere, we need to marginalize over $r$.
- If we start with $\mathbf{u} \in \mathbb{S}^{n-1}$, sampled from the uniform distribution, we do not need to marginalize, because we are already in $n-1$ dimensions. Instead, to obtain ACG variates (and the associated density), we can directly do the change of variables $\mathbf{v} = f_{\mathbf{T}}(\mathbf{u})$, for which further details are given in the next subsection.
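To illustrate that the two constructions agree, the sketch below (assuming NumPy) draws ACG samples on the circle both by projecting zero-mean normal draws and by applying the normalized linear transform to uniform draws, then compares the empirical angle histograms, which should match up to Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(2)
T = np.array([[1.5, 0.4], [0.0, 0.8]])
Sigma = T @ T.T
N = 100_000

# (a) radially project zero-mean multivariate normal draws
x = rng.multivariate_normal(np.zeros(2), Sigma, size=N)
v_a = x / np.linalg.norm(x, axis=1, keepdims=True)

# (b) apply the normalized linear transform f_T to uniform draws on the circle
u = rng.normal(size=(N, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v_b = u @ T.T
v_b /= np.linalg.norm(v_b, axis=1, keepdims=True)

# compare the two empirical angle distributions
bins = np.linspace(-np.pi, np.pi, 9)
print(np.histogram(np.arctan2(v_a[:, 1], v_a[:, 0]), bins=bins)[0])
print(np.histogram(np.arctan2(v_b[:, 1], v_b[:, 0]), bins=bins)[0])
```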
Caveat: when $\boldsymbol{\mu}$ is nonzero, although $s\mathbf{T}\mathbf{u} + \boldsymbol{\mu} \sim \mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, a similar duality does not hold:

$$\frac{\mathbf{T}\mathbf{u} + \boldsymbol{\mu}}{\lVert\mathbf{T}\mathbf{u} + \boldsymbol{\mu}\rVert} \neq \frac{s\mathbf{T}\mathbf{u} + \boldsymbol{\mu}}{\lVert s\mathbf{T}\mathbf{u} + \boldsymbol{\mu}\rVert} \sim \mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$$

Although we can radially project affine-transformed normal variates to get $\mathcal{PN}_n$ variates, this does not work for uniform variates.
Wider application of the normalized linear transform
The normalized linear transform, $\mathbf{v} = f_{\mathbf{T}}(\mathbf{u})$, is a bijection from the unit sphere to itself; the inverse is $\mathbf{u} = f_{\mathbf{T}^{-1}}(\mathbf{v})$. This transform is of independent interest, as it may be applied as a probabilistic flow on the hypersphere (similar to a normalizing flow) to generalize other (non-uniform) distributions on hyperspheres, for example the von Mises-Fisher distribution. The fact that we have a closed form for the ACG density allows us to recover, also in closed form, the differential volume change induced by this transform.

For the change of variables $\mathbf{v} = f_{\mathbf{T}}(\mathbf{u})$ on the manifold $\mathbb{S}^{n-1}$, the uniform and ACG densities are related as:[13]
$$\tilde{p}_{\text{ACG}}(\mathbf{v} \mid \boldsymbol{\Sigma}) = \frac{p_{\text{uniform}}}{R(\mathbf{v}, \boldsymbol{\Sigma})}$$

where the (constant) uniform density is $p_{\text{uniform}} = \frac{\Gamma(n/2)}{2\pi^{n/2}}$ and where $R(\mathbf{v}, \boldsymbol{\Sigma})$ is the differential volume change factor from the input to the output of the transformation; specifically, it is given by the absolute value of the determinant of an $(n-1)$-by-$(n-1)$ matrix:

$$R(\mathbf{v}, \boldsymbol{\Sigma}) = \operatorname{abs}\left|\mathbf{Q}_{\mathbf{v}}'\,\mathbf{J}_{\mathbf{u}}\,\mathbf{Q}_{\mathbf{u}}\right|$$

where $\mathbf{J}_{\mathbf{u}}$ is the $n$-by-$n$ Jacobian matrix of the transformation in Euclidean space, $f_{\mathbf{T}} : \mathbb{R}^n \to \mathbb{R}^n$, evaluated at $\mathbf{u}$. In Euclidean space, the transformation and its Jacobian are non-invertible, but when the domain and co-domain are restricted to $\mathbb{S}^{n-1}$, then $f_{\mathbf{T}} : \mathbb{S}^{n-1} \to \mathbb{S}^{n-1}$ is a bijection, and the induced differential volume ratio, $R(\mathbf{v}, \boldsymbol{\Sigma})$, is obtained by projecting $\mathbf{J}_{\mathbf{u}}$ onto the $(n-1)$-dimensional tangent spaces at the transformation input and output: $\mathbf{Q}_{\mathbf{u}}, \mathbf{Q}_{\mathbf{v}}$ are $n$-by-$(n-1)$ matrices whose orthonormal columns span the respective tangent spaces. Although the above determinant formula is relatively easy to evaluate numerically on a software platform equipped with linear algebra and automatic differentiation, a simple closed form is hard to derive directly. However, since we already have $\tilde{p}_{\text{ACG}}$, we can recover:

$$R(\mathbf{v}, \boldsymbol{\Sigma}) = \left|\boldsymbol{\Sigma}\right|^{\frac{1}{2}}(\mathbf{v}'\boldsymbol{\Sigma}^{-1}\mathbf{v})^{\frac{n}{2}} = \frac{\operatorname{abs}\left|\mathbf{T}\right|}{\lVert\mathbf{T}\mathbf{u}\rVert^n}$$

where in the final right-hand side it is understood that $\boldsymbol{\Sigma} = \mathbf{T}\mathbf{T}'$ and $\mathbf{u} = f_{\mathbf{T}^{-1}}(\mathbf{v})$.
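The agreement between the projected-Jacobian determinant and this closed form can be verified numerically. The sketch below (assuming NumPy/SciPy) uses the analytic Euclidean Jacobian of $f_{\mathbf{T}}$, namely $\mathbf{J}_{\mathbf{u}} = (\mathbf{I} - \mathbf{v}\mathbf{v}')\,\mathbf{T}/\lVert\mathbf{T}\mathbf{u}\rVert$, instead of automatic differentiation, and `scipy.linalg.null_space` to build orthonormal tangent-space bases.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
n = 3
T = rng.normal(size=(n, n))
Sigma = T @ T.T

u = rng.normal(size=n)
u /= np.linalg.norm(u)                   # a point on the input sphere
Tu = T @ u
v = Tu / np.linalg.norm(Tu)              # its image under f_T

# Euclidean Jacobian of f_T at u:  J = (I - v v') T / ||T u||
J = (np.eye(n) - np.outer(v, v)) @ T / np.linalg.norm(Tu)

Q_u = null_space(u[None, :])             # orthonormal basis of the tangent space at u
Q_v = null_space(v[None, :])             # orthonormal basis of the tangent space at v
R_det = abs(np.linalg.det(Q_v.T @ J @ Q_u))

R_closed = abs(np.linalg.det(T)) / np.linalg.norm(Tu) ** n
print(R_det, R_closed)                   # the two values agree
```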
The normalized linear transform can now be used, for example, to give a closed-form density for a more flexible distribution on the hypersphere, generalized from the von Mises-Fisher distribution. Let $\mathbf{x} \sim \text{VMF}(\boldsymbol{\mu}, \kappa)$ and $\mathbf{v} = f_{\mathbf{T}}(\mathbf{x})$; the resulting density is:

$$p(\mathbf{v} \mid \boldsymbol{\mu}, \kappa, \mathbf{T}) = \frac{\tilde{p}_{\text{VMF}}\bigl(f_{\mathbf{T}^{-1}}(\mathbf{v}) \mid \boldsymbol{\mu}, \kappa\bigr)}{R(\mathbf{v}, \mathbf{T}\mathbf{T}')}$$
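On the circle ($n = 2$), where the von Mises-Fisher distribution reduces to the von Mises distribution, this construction is easy to check numerically: the transformed density should still integrate to one. A sketch assuming NumPy/SciPy (helper names are illustrative):

```python
import numpy as np
from scipy.special import iv
from scipy.integrate import quad

def vmf2_density(x, mu, kappa):
    """Von Mises-Fisher density on the unit circle (von Mises), w.r.t. arc length."""
    return np.exp(kappa * mu @ x) / (2 * np.pi * iv(0, kappa))

def flowed_vmf_density(v, mu, kappa, T):
    """Density of v = f_T(x), x ~ VMF(mu, kappa), using the volume-change factor R."""
    x = np.linalg.inv(T) @ v
    x /= np.linalg.norm(x)                                   # x = f_{T^{-1}}(v)
    R = abs(np.linalg.det(T)) / np.linalg.norm(T @ x) ** 2   # R(v, T T') for n = 2
    return vmf2_density(x, mu, kappa) / R

mu = np.array([np.cos(0.8), np.sin(0.8)])
kappa = 2.5
T = np.array([[1.3, 0.6], [0.1, 0.9]])
total = quad(lambda t: flowed_vmf_density(np.array([np.cos(t), np.sin(t)]), mu, kappa, T),
             0.0, 2 * np.pi)[0]
print(total)  # ~1.0
```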
Sources
- Pukkila, Tarmo M.; Rao, C. Radhakrishna (1988). "Pattern recognition based on scale invariant discriminant functions". Information Sciences. 45 (3): 379–389. doi:10.1016/0020-0255(88)90012-6.
- Hernandez-Stumpfhauser, Daniel; Breidt, F. Jay; van der Woerd, Mark J. (2017). "The General Projected Normal Distribution of Arbitrary Dimension: Modeling and Bayesian Inference". Bayesian Analysis. 12 (1): 113–133. doi:10.1214/15-BA989.
- Wang, Fangpo; Gelfand, Alan E (2013). "Directional data analysis under the general projected normal distribution". Statistical Methodology. 10 (1). Elsevier: 113–127. doi:10.1016/j.stamet.2012.07.005. PMC 3773532. PMID 24046539.
- Tyler, David E (1987). "Statistical analysis for the angular central Gaussian distribution on the sphere". Biometrika. 74 (3): 579–589. doi:10.2307/2336697.
- Sorrenson, Peter; Draxler, Felix; Rousselot, Armand; Hummerich, Sander; Köthe, Ullrich (2024). "Learning Distributions on Manifolds with Free-Form Flows". arXiv:2312.09852 [cs.LG].
References
1. Wang & Gelfand 2013.
2. Pukkila & Rao 1988.
3. Hernandez-Stumpfhauser, Breidt & van der Woerd 2017, p. 115.
4. Pukkila & Rao 1988, p. 381.
5. Hernandez-Stumpfhauser, Breidt & van der Woerd 2017, p. 117.
6. Pukkila & Rao 1988, p. 381.
7. Sorrenson et al. 2024, Appendix A.
8. Hernandez-Stumpfhauser, Breidt & van der Woerd 2017, p. 115.
9. Hernandez-Stumpfhauser, Breidt & van der Woerd 2017, Supplementary material, p. 1.
10. Hernandez-Stumpfhauser, Breidt & van der Woerd 2017, p. 123.
11. Tyler 1987.
12. Tyler 1987.
13. Sorrenson et al. 2024, Appendix A.