Kernel methods become computationally infeasible when the number of points $D$ is so large that the kernel matrix $K$ cannot be stored in memory.
If $D$ is the number of training examples, the storage and computational costs required to find the solution of the problem with a general kernel method are $O(D^{2})$ and $O(D^{3})$ respectively. The Nyström approximation can allow a significant speed-up of the computations.[3][4] This speed-up is achieved by using, instead of the kernel matrix, its approximation $\tilde{K}$ of rank $d$. An advantage of the method is that it is not necessary to compute or store the whole kernel matrix, but only a submatrix of size $d \times D$.
It reduces the storage and complexity requirements to $O(Dd)$ and $O(Dd^{2})$ respectively.
The method is named "Nyström approximation" because it can be interpreted as a case of the Nyström method from integral equation theory.[5][6][7]: 357 [8]
Consider a positive-definite kernel function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Given some data points $x_{1}, x_{2}, \dots, x_{D}$, we can form the kernel matrix $K \in \mathbb{R}^{D \times D}$ such that $K_{ij} := k(x_{i}, x_{j})$.
Now, let $d \in \{1, \dots, D\}$ be an integer. We can partition the kernel matrix as
$$K = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} \in \mathbb{R}^{D \times D},$$
where $K_{11} \in \mathbb{R}^{d \times d}$ is its top-left corner. Also, set
$$C = \begin{bmatrix} K_{11} \\ K_{21} \end{bmatrix}$$
to be its first $d$ columns.
The Nyström approximation of $K$ in this case is
$$\tilde{K} = C K_{11}^{+} C^{T} = \begin{bmatrix} K_{11} \\ K_{21} \end{bmatrix} K_{11}^{+} \begin{bmatrix} K_{11} & K_{12} \end{bmatrix},$$
where $K_{11}^{+}$ is the Moore–Penrose pseudoinverse of $K_{11}$, which exists since $K_{11}$ is positive semidefinite.
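As a concrete illustration, here is a minimal NumPy sketch of this construction, using an RBF kernel and the first $d$ points as landmarks; the kernel choice, the function names and the accuracy check are illustrative assumptions, not part of the method's definition.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def nystrom_factors(X, d, gamma=1.0):
    # Only the D x d submatrix C (the first d columns of K) is ever formed.
    C = rbf_kernel(X, X[:d], gamma)        # C = [K11; K21], shape D x d
    K11 = C[:d, :]                         # top-left d x d block
    return C, np.linalg.pinv(K11)          # K_tilde = C @ pinv(K11) @ C.T

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # D = 500 points in R^3
C, K11_pinv = nystrom_factors(X, d=50)
K_tilde = C @ K11_pinv @ C.T               # materialized here only to check accuracy
K = rbf_kernel(X, X)                       # full kernel matrix, for comparison only
print(np.linalg.norm(K - K_tilde) / np.linalg.norm(K))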
By Mercer's theorem, we can decompose the kernel matrix as a Gram matrix: $K = X^{T} X$, where $X \in \mathbb{R}^{N \times D}$. Let $X'$ be the left $d$ columns of $X$.
Theorem — We have
$$\tilde{K} = X^{T} \Pi X,$$
where $\Pi = X'(X')^{+} = X'(X'^{T}X')^{+}X'^{T}$ is the orthogonal projection onto the column space of $X'$.
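This identity can be verified directly from the definitions above, using the standard pseudoinverse identity $(X'^{T}X')^{+}X'^{T} = X'^{+}$:
$$\tilde{K} = C K_{11}^{+} C^{T} = (X^{T}X')\,(X'^{T}X')^{+}\,(X'^{T}X) = X^{T}\bigl(X'(X')^{+}\bigr)X = X^{T}\Pi X,$$
since $C$, being the first $d$ columns of $K = X^{T}X$, equals $X^{T}X'$, and $K_{11} = X'^{T}X'$.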
In a vector and kernel notation, the problem of regularized least squares can be rewritten as:
$$\min_{c \in \mathbb{R}^{n}} \frac{1}{n} \|Y - Kc\|_{\mathbb{R}^{n}}^{2} + \lambda \langle c, Kc \rangle_{\mathbb{R}^{n}}.$$
Computing the gradient and setting it to 0, the minimum can be obtained:
$$\begin{aligned}
& -\frac{1}{n} K (Y - Kc) + \lambda K c = 0 \\
\Rightarrow{} & K(K + \lambda n I) c = K Y \\
\Rightarrow{} & c = (K + \lambda n I)^{-1} Y, \text{ where } c \in \mathbb{R}^{n}.
\end{aligned}$$
The inverse matrix $(K + \lambda n I)^{-1}$ can then be computed using the Woodbury matrix identity, after replacing $K$ by its rank-$q$ Nyström approximation $K \approx K_{n,q} K_{q}^{-1} K_{n,q}^{T}$, where $K_{q} \in \mathbb{R}^{q \times q}$ plays the role of the block $K_{11}$ above and $K_{n,q} \in \mathbb{R}^{n \times q}$ that of $C$:
$$\begin{aligned}
(K + \lambda n I)^{-1} &= \frac{1}{\lambda n} \left( \frac{1}{\lambda n} K + I \right)^{-1} \\
&\approx \frac{1}{\lambda n} \left( I + K_{n,q} (\lambda n K_{q})^{-1} K_{n,q}^{T} \right)^{-1} \\
&= \frac{1}{\lambda n} \left( I - K_{n,q} (\lambda n K_{q} + K_{n,q}^{T} K_{n,q})^{-1} K_{n,q}^{T} \right).
\end{aligned}$$
It has the desired storage and complexity requirements.
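The following NumPy sketch puts the pieces together for regularized least squares with the Nyström/Woodbury shortcut, assuming an RBF kernel and the first $q$ training points as landmarks; the kernel choice and the function name are illustrative assumptions.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def nystrom_rls_coefficients(X, Y, q, lam, gamma=1.0):
    # Approximate c = (K + lam*n*I)^{-1} Y by replacing K with its rank-q Nystrom
    # approximation and applying the Woodbury identity; only a q x q linear system
    # is solved and only the n x q block K_nq is stored.
    n = X.shape[0]
    K_nq = rbf_kernel(X, X[:q], gamma)           # n x q block of kernel columns
    K_q = K_nq[:q, :]                            # q x q landmark kernel matrix
    inner = lam * n * K_q + K_nq.T @ K_nq        # q x q matrix to invert
    c = (Y - K_nq @ np.linalg.solve(inner, K_nq.T @ Y)) / (lam * n)
    return c

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
c = nystrom_rls_coefficients(X, Y, q=100, lam=1e-3)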
Main article: Random feature
Let $\mathbf{x}, \mathbf{x'} \in \mathbb{R}^{d}$ be samples of data and $z : \mathbb{R}^{d} \to \mathbb{R}^{D}$ a randomized feature map (mapping a single vector to a vector of higher dimensionality), so that the inner product between a pair of transformed points approximates their kernel evaluation:
$$K(\mathbf{x}, \mathbf{x'}) = \langle \Phi(\mathbf{x}), \Phi(\mathbf{x'}) \rangle \approx z(\mathbf{x})^{T} z(\mathbf{x'}),$$
where $\Phi$ is the mapping embedded in the RBF kernel.
Since $z$ is low-dimensional, the input can be easily transformed with $z$; afterwards, various linear learning methods can be applied to approximate the output of the corresponding nonlinear kernel method. There are different randomized feature maps for computing approximations to RBF kernels, for instance random Fourier features and random binning features.
The random Fourier features map produces a Monte Carlo approximation to the feature map; the Monte Carlo construction is randomized rather than deterministic. These random features consist of sinusoids $\cos(w^{T}\mathbf{x} + b)$ drawn randomly from the Fourier transform of the kernel to be approximated, where $w \in \mathbb{R}^{d}$ and $b \in \mathbb{R}$ are random variables. A random line (direction $w$) is chosen, the data points are projected onto it, and the resulting scalar is passed through a sinusoid. The product of the transformed points approximates a shift-invariant kernel. Since the map is smooth, random Fourier features work well on interpolation tasks.
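As an illustration, here is a minimal sketch of random Fourier features for the Gaussian RBF kernel $K(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma \|\mathbf{x} - \mathbf{x'}\|^{2})$, whose Fourier transform is a Gaussian distribution over frequencies; the particular kernel, the seed and the function name are assumptions made for the example.

import numpy as np

def random_fourier_features(X, D, gamma=1.0, seed=None):
    # Map rows of X (n x d) to z(X) (n x D) so that z(x) . z(y) ~ exp(-gamma * ||x - y||^2).
    # For this kernel the random frequencies w are Gaussian with variance 2*gamma per coordinate.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))   # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                 # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, D=2000, gamma=0.5, seed=1)
approx = Z @ Z.T                                              # approximate kernel matrix
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())                           # small for large D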
A random binning features map partitions the input space using randomly shifted grids at randomly chosen resolutions and assigns to an input point a binary bit string that corresponds to the bins in which it falls. The grids are constructed so that the probability that two points $\mathbf{x}, \mathbf{x'} \in \mathbb{R}^{d}$ are assigned to the same bin is proportional to $K(\mathbf{x}, \mathbf{x'})$. The inner product between a pair of transformed points is proportional to the number of times the two points are binned together, and is therefore an unbiased estimate of $K(\mathbf{x}, \mathbf{x'})$. Since this mapping is not smooth and uses the proximity between input points, random binning features work well for approximating kernels that depend only on the $L_{1}$ distance between data points.
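A minimal sketch of this idea for the Laplacian kernel $K(\mathbf{x}, \mathbf{x'}) = \exp(-\|\mathbf{x} - \mathbf{x'}\|_{1}/\sigma)$, for which drawing the grid pitch from a Gamma distribution makes the bin-collision frequency an unbiased estimate of the kernel; the kernel choice and the function name are assumptions made for the example.

import numpy as np

def random_binning_kernel_estimate(X, P=500, sigma=1.0, seed=None):
    # Estimate K(x, x') = exp(-||x - x'||_1 / sigma) on the rows of X by counting,
    # over P independent random grids, how often two points share the same bin.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    hits = np.zeros((n, n))
    for _ in range(P):
        pitch = rng.gamma(shape=2.0, scale=sigma, size=d)     # grid resolution per dimension
        shift = rng.uniform(0.0, pitch)                       # random grid shift per dimension
        bins = np.floor((X - shift) / pitch)                  # integer bin index per coordinate
        same = np.all(bins[:, None, :] == bins[None, :, :], axis=-1)
        hits += same
    return hits / P                                           # collision frequency ~ K

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
K_hat = random_binning_kernel_estimate(X, P=2000, sigma=1.0, seed=1)
K = np.exp(-np.abs(X[:, None, :] - X[None, :, :]).sum(-1))
print(np.abs(K_hat - K).max())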
The approaches for large-scale kernel learning (the Nyström method and random features) differ in that the Nyström method uses data-dependent basis functions, while in the random features approach the basis functions are sampled from a distribution independent of the training data. This difference leads to an improved analysis for kernel learning approaches based on the Nyström method. When there is a large gap in the eigenspectrum of the kernel matrix, approaches based on the Nyström method can achieve better results than the random-features-based approach.[9]
Bach, Francis R.; Jordan, Michael I. (7–11 August 2005). "Predictive low-rank decomposition for kernel methods". International Conference on Machine Learning. Bonn, Germany: ACM Press. pp. 33–40. doi:10.1145/1102351.1102356. ISBN 978-1-59593-180-1.
Williams, C. K. I.; Seeger, M. (2001). "Using the Nyström method to speed up kernel machines". Advances in Neural Information Processing Systems. 13. http://papers.nips.cc/paper/1866-using-the-nystrom-method-to-speed-up-kernel-machines
Drineas, Petros; Mahoney, Michael W. (2005). "Approximating a Gram Matrix for Improved Kernel-Based Learning". In Auer, Peter; Meir, Ron (eds.). Learning Theory. Vol. 3559. Berlin, Heidelberg: Springer. pp. 323–337. doi:10.1007/11503415_22. ISBN 978-3-540-26556-6.
Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2018). Foundations of Machine Learning. Adaptive Computation and Machine Learning (Second ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03940-6.
Gittens, Alex; Mahoney, Michael W. (3 June 2013). "Revisiting the Nyström Method for Improved Large-Scale Machine Learning". arXiv:1303.1849.
Yang, Tianbao; Li, Yu-Feng; Mahdavi, Mehrdad; Jin, Rong; Zhou, Zhi-Hua (2012). "Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison". Advances in Neural Information Processing Systems. 25 (NIPS). http://papers.nips.cc/paper/4588-nystrom-method-vs-random-fourier-features-a-theoretical-and-empirical-comparison.pdf