In its simplest form, the fast multipole method seeks to evaluate the following function:

$$f(y) = \sum_{\alpha=1}^{N} \frac{\phi_\alpha}{y - x_\alpha},$$

where $x_\alpha \in [-1,1]$ are a set of poles and $\phi_\alpha \in \mathbb{C}$ are the corresponding pole weights, on a set of points $\{y_1,\ldots,y_M\}$ with $y_\beta \in [-1,1]$. This is the one-dimensional form of the problem, but the algorithm generalizes readily to multiple dimensions and to kernels other than $(y-x)^{-1}$.
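For concreteness, a direct evaluation of this sum might look as follows in Python/NumPy. This is a minimal sketch, not a reference implementation: the names `naive_sum`, `poles`, `weights`, and `targets` are placeholders, and the weights are taken real for simplicity (the definition above allows complex $\phi_\alpha$).

```python
import numpy as np

def naive_sum(targets, poles, weights):
    """Direct evaluation of f(y) = sum_a weights[a] / (y - poles[a])."""
    # Broadcasting forms the full M-by-N matrix of kernels 1/(y_b - x_a),
    # which is exactly the O(MN) cost the fast multipole method avoids.
    return (weights / (targets[:, None] - poles[None, :])).sum(axis=1)

rng = np.random.default_rng(0)
poles = rng.uniform(-1.0, 1.0, size=1000)    # x_alpha in [-1, 1]
weights = rng.standard_normal(1000)          # phi_alpha (real for simplicity)
targets = rng.uniform(-1.0, 1.0, size=2000)  # y_beta in [-1, 1]
f = naive_sum(targets, poles, weights)
```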
Naively, evaluating $f(y)$ on $M$ points requires $\mathcal{O}(MN)$ operations. The crucial observation behind the fast multipole method is that if the distance between $y$ and $x$ is large enough, then $(y-x)^{-1}$ is well approximated by a polynomial. Specifically, let $-1 < t_1 < \ldots < t_p < 1$ be the Chebyshev nodes of order $p \geq 2$ and let $u_1(y), \ldots, u_p(y)$ be the corresponding Lagrange basis polynomials. One can show that the interpolating polynomial

$$\sum_{\alpha=1}^{p} \frac{u_\alpha(y)}{t_\alpha - x} = \frac{1}{y - x} + \epsilon_p(y)$$

converges quickly with polynomial order, $|\epsilon_p(y)| < 5^{-p}$, provided that the pole is far enough away from the region of interpolation, $|x| \geq 3$ and $|y| \leq 1$. This is known as the "local expansion".
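The decay of this error can be checked numerically. Below is a sketch assuming Chebyshev nodes of the first kind and a direct $\mathcal{O}(p^2)$-per-point evaluation of the Lagrange basis; `chebyshev_nodes` and `lagrange_basis` are illustrative helper names, reused in the later sketches.

```python
import numpy as np

def chebyshev_nodes(p):
    """Chebyshev nodes of the first kind on (-1, 1)."""
    k = np.arange(1, p + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * p))

def lagrange_basis(t, y):
    """Evaluate all p Lagrange basis polynomials for nodes t at points y.

    Returns U with U[j, a] = u_a(y[j]).  The O(p^2) cost per point is
    fine for the small p used here.
    """
    p = len(t)
    U = np.ones((len(y), p))
    for a in range(p):
        for b in range(p):
            if b != a:
                U[:, a] *= (y - t[b]) / (t[a] - t[b])
    return U

# Error of the local expansion of 1/(y - x) for a well-separated pole x = 3.
x = 3.0
y = np.linspace(-1.0, 1.0, 401)
for p in (4, 8, 12):
    t = chebyshev_nodes(p)
    approx = lagrange_basis(t, y) @ (1.0 / (t - x))
    err = np.max(np.abs(approx - 1.0 / (y - x)))
    print(p, err, 5.0 ** -p)   # observed error vs. the 5^{-p} bound
```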
The speed-up of the fast multipole method derives from this interpolation: provided that all the poles are "far away", we evaluate the sum only on the Chebyshev nodes at a cost of $\mathcal{O}(Np)$, and then interpolate it onto all the desired points at a cost of $\mathcal{O}(Mp)$:

$$f(y) \approx \sum_{\alpha=1}^{p} u_\alpha(y) \sum_{\beta=1}^{N} \frac{\phi_\beta}{t_\alpha - x_\beta}.$$

Since $p = -\log_5 \epsilon$, where $\epsilon$ is the numerical tolerance, the total cost is $\mathcal{O}((M+N)\log(1/\epsilon))$.
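These two steps translate almost line by line into code. The sketch below reuses `naive_sum`, `chebyshev_nodes`, and `lagrange_basis` from above and assumes every pole satisfies $|x| \geq 3$ while all targets lie in $[-1,1]$; the test setup is illustrative only.

```python
def fast_sum(targets, poles, weights, p=10):
    """Approximate f on the targets in O((M + N) p) operations,
    assuming every pole is well separated (|x| >= 3, |y| <= 1)."""
    t = chebyshev_nodes(p)
    # Step 1: evaluate the exact sum at the p Chebyshev nodes, O(N p).
    node_values = naive_sum(t, poles, weights)
    # Step 2: interpolate from the nodes to the M targets, O(M p).
    return lagrange_basis(t, targets) @ node_values

rng = np.random.default_rng(1)
poles = rng.uniform(3.0, 5.0, size=1000)     # well separated from [-1, 1]
weights = rng.standard_normal(1000)
targets = rng.uniform(-1.0, 1.0, size=2000)
err = np.max(np.abs(fast_sum(targets, poles, weights)
                    - naive_sum(targets, poles, weights)))
print(err)   # comparable to the 5^{-p} bound
```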
To ensure that the poles are indeed well-separated, one recursively subdivides the unit interval such that only $\mathcal{O}(p)$ poles end up in each interval. One then uses the explicit formula within each interval and interpolation for all intervals that are well-separated. This does not spoil the scaling, since one needs at most $\log(1/\epsilon)$ levels within the given tolerance.
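A single level of this subdivision can be sketched as follows, again reusing the helpers above. This is a simplified, non-recursive variant chosen for brevity, not the full algorithm: the interval is cut into uniform pieces, poles in a piece or its immediate neighbours are summed directly, and all remaining poles are well separated in rescaled coordinates ($|x| \geq 3$) and handled by interpolation. All names and parameters are illustrative.

```python
def one_level_sum(targets, poles, weights, n_intervals=16, p=10):
    """One level of the near/far splitting on [-1, 1]."""
    edges = np.linspace(-1.0, 1.0, n_intervals + 1)
    h = 1.0 / n_intervals                     # half-width of each interval
    t = chebyshev_nodes(p)
    pole_bin = np.clip(np.digitize(poles, edges) - 1, 0, n_intervals - 1)
    tgt_bin = np.clip(np.digitize(targets, edges) - 1, 0, n_intervals - 1)
    f = np.zeros(len(targets))
    for i in range(n_intervals):
        sel = tgt_bin == i
        if not sel.any():
            continue
        c = 0.5 * (edges[i] + edges[i + 1])   # interval centre
        near = np.abs(pole_bin - i) <= 1      # self and immediate neighbours
        # Near field: direct evaluation (each interval holds O(p) poles).
        f[sel] = naive_sum(targets[sel], poles[near], weights[near])
        far = ~near
        if far.any():
            # Far field: after rescaling by (. - c) / h the targets lie in
            # [-1, 1] and the far poles satisfy |x| >= 3, so the
            # node-evaluation + interpolation trick applies.
            node_vals = naive_sum(c + h * t, poles[far], weights[far])
            f[sel] += lagrange_basis(t, (targets[sel] - c) / h) @ node_vals
    return f
```

Choosing `n_intervals` of order $N/p$ keeps $\mathcal{O}(p)$ poles per interval, matching the text; the recursive method replaces the uniform grid with a tree so that this holds at every level.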