Given two column vectors X = ( x 1 , … , x n ) T {\displaystyle X=(x_{1},\dots ,x_{n})^{T}} and Y = ( y 1 , … , y m ) T {\displaystyle Y=(y_{1},\dots ,y_{m})^{T}} of random variables with finite second moments, one may define the cross-covariance Σ X Y = cov ( X , Y ) {\displaystyle \Sigma _{XY}=\operatorname {cov} (X,Y)} to be the n × m {\displaystyle n\times m} matrix whose ( i , j ) {\displaystyle (i,j)} entry is the covariance cov ( x i , y j ) {\displaystyle \operatorname {cov} (x_{i},y_{j})} . In practice, we would estimate the covariance matrix based on sampled data from X {\displaystyle X} and Y {\displaystyle Y} (i.e. from a pair of data matrices).
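As a concrete illustration, here is a minimal NumPy sketch (the data matrices are invented for illustration) of estimating Σ_XY from a pair of data matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 500                                 # number of observations
X = rng.standard_normal((p, 3))         # data matrix for X (p x n, n = 3)
Y = rng.standard_normal((p, 2))         # data matrix for Y (p x m, m = 2)

# Center each column, then form the sample cross-covariance (an n x m matrix
# whose (i, j) entry estimates cov(x_i, y_j)).
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
Sigma_XY = Xc.T @ Yc / (p - 1)
print(Sigma_XY.shape)                   # (3, 2)
```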
Canonical-correlation analysis seeks a sequence of vectors a k {\displaystyle a_{k}} ( a k ∈ R n {\displaystyle a_{k}\in \mathbb {R} ^{n}} ) and b k {\displaystyle b_{k}} ( b k ∈ R m {\displaystyle b_{k}\in \mathbb {R} ^{m}} ) such that the random variables a k T X {\displaystyle a_{k}^{T}X} and b k T Y {\displaystyle b_{k}^{T}Y} maximize the correlation ρ = corr ( a k T X , b k T Y ) {\displaystyle \rho =\operatorname {corr} (a_{k}^{T}X,b_{k}^{T}Y)} . The (scalar) random variables U = a 1 T X {\displaystyle U=a_{1}^{T}X} and V = b 1 T Y {\displaystyle V=b_{1}^{T}Y} are the first pair of canonical variables. Then one seeks vectors maximizing the same correlation subject to the constraint that they are to be uncorrelated with the first pair of canonical variables; this gives the second pair of canonical variables. This procedure may be continued up to min { m , n } {\displaystyle \min\{m,n\}} times.
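To make the objective concrete, the following sketch (weights and data invented for illustration) evaluates the correlation corr(aᵀX, bᵀY) for one candidate pair of weight vectors; CCA maximizes this quantity over all such pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 1000
X = rng.standard_normal((p, 3))
Y = X[:, :2] + 0.5 * rng.standard_normal((p, 2))   # Y shares structure with X

a = np.array([1.0, 0.0, 0.0])    # one candidate weight vector for X
b = np.array([1.0, 0.0])         # one candidate weight vector for Y

u, v = X @ a, Y @ b              # samples of the scalars a^T X and b^T Y
rho = np.corrcoef(u, v)[0, 1]    # the objective that CCA maximizes over (a, b)
print(rho)
```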
The sets of vectors a k , b k {\displaystyle a_{k},b_{k}} are called canonical directions or weight vectors or simply weights. The 'dual' sets of vectors Σ X X a k , Σ Y Y b k {\displaystyle \Sigma _{XX}a_{k},\Sigma _{YY}b_{k}} are called canonical loading vectors or simply loadings; these are often more straightforward to interpret than the weights.8
Let Σ X Y {\displaystyle \Sigma _{XY}} be the cross-covariance matrix for any pair of (vector-shaped) random variables X {\displaystyle X} and Y {\displaystyle Y} . The target function to maximize is

{\displaystyle \rho ={\frac {a^{T}\Sigma _{XY}b}{{\sqrt {a^{T}\Sigma _{XX}a}}\,{\sqrt {b^{T}\Sigma _{YY}b}}}}.}
The first step is to define a change of basis, setting

{\displaystyle c=\Sigma _{XX}^{1/2}a,\qquad d=\Sigma _{YY}^{1/2}b,}
where Σ X X 1 / 2 {\displaystyle \Sigma _{XX}^{1/2}} and Σ Y Y 1 / 2 {\displaystyle \Sigma _{YY}^{1/2}} can be obtained from the eigen-decomposition (or by diagonalization):

{\displaystyle \Sigma _{XX}^{1/2}=V_{X}D_{X}^{1/2}V_{X}^{T},\qquad V_{X}D_{X}V_{X}^{T}=\Sigma _{XX},}
and

{\displaystyle \Sigma _{YY}^{1/2}=V_{Y}D_{Y}^{1/2}V_{Y}^{T},\qquad V_{Y}D_{Y}V_{Y}^{T}=\Sigma _{YY}.}
Thus

{\displaystyle \rho ={\frac {c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}d}{{\sqrt {c^{T}c}}\,{\sqrt {d^{T}d}}}}.}
By the Cauchy–Schwarz inequality,

{\displaystyle \left(c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}\right)d\leq \left(c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}\Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1/2}c\right)^{1/2}\left(d^{T}d\right)^{1/2},}

so that

{\displaystyle \rho \leq {\frac {\left(c^{T}\Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1/2}c\right)^{1/2}}{\left(c^{T}c\right)^{1/2}}}.}
There is equality if the vectors d {\displaystyle d} and Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 / 2 c {\displaystyle \Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1/2}c} are collinear. In addition, the maximum correlation is attained if c {\displaystyle c} is the eigenvector with the maximum eigenvalue of the matrix Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 Σ Y X Σ X X − 1 / 2 {\displaystyle \Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1/2}} (see Rayleigh quotient). The subsequent pairs are found by using eigenvalues of decreasing magnitude. Orthogonality is guaranteed by the symmetry of the correlation matrices.
Another way of viewing this computation is that c {\displaystyle c} and d {\displaystyle d} are the left and right singular vectors of the correlation matrix of X and Y corresponding to the highest singular value.
The solution is therefore: c {\displaystyle c} is an eigenvector of Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 Σ Y X Σ X X − 1 / 2 {\displaystyle \Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1/2}} , and d {\displaystyle d} is proportional to Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 / 2 c {\displaystyle \Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1/2}c} .
Reciprocally, there is also: d {\displaystyle d} is an eigenvector of Σ Y Y − 1 / 2 Σ Y X Σ X X − 1 Σ X Y Σ Y Y − 1 / 2 {\displaystyle \Sigma _{YY}^{-1/2}\Sigma _{YX}\Sigma _{XX}^{-1}\Sigma _{XY}\Sigma _{YY}^{-1/2}} , and c {\displaystyle c} is proportional to Σ X X − 1 / 2 Σ X Y Σ Y Y − 1 / 2 d {\displaystyle \Sigma _{XX}^{-1/2}\Sigma _{XY}\Sigma _{YY}^{-1/2}d} .
Reversing the change of coordinates, we have that a {\displaystyle a} is an eigenvector of Σ X X − 1 Σ X Y Σ Y Y − 1 Σ Y X {\displaystyle \Sigma _{XX}^{-1}\Sigma _{XY}\Sigma _{YY}^{-1}\Sigma _{YX}} and b {\displaystyle b} is proportional to Σ Y Y − 1 Σ Y X a {\displaystyle \Sigma _{YY}^{-1}\Sigma _{YX}a} ; equivalently, b {\displaystyle b} is an eigenvector of Σ Y Y − 1 Σ Y X Σ X X − 1 Σ X Y {\displaystyle \Sigma _{YY}^{-1}\Sigma _{YX}\Sigma _{XX}^{-1}\Sigma _{XY}} and a {\displaystyle a} is proportional to Σ X X − 1 Σ X Y b {\displaystyle \Sigma _{XX}^{-1}\Sigma _{XY}b} .
The canonical variables are defined by:

{\displaystyle U=c^{T}\Sigma _{XX}^{-1/2}X=a^{T}X}

{\displaystyle V=d^{T}\Sigma _{YY}^{-1/2}Y=b^{T}Y}
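The derivation above translates directly into a short NumPy sketch (function and variable names are invented; the data is synthetic): estimate the covariance blocks, whiten with Σ_XX^{-1/2} and Σ_YY^{-1/2}, and take the SVD of the whitened cross-covariance, whose singular values are the canonical correlations:

```python
import numpy as np

def cca(X, Y):
    """Canonical weights and correlations; X is p x n, Y is p x m."""
    p = X.shape[0]
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / (p - 1)
    Syy = Yc.T @ Yc / (p - 1)
    Sxy = Xc.T @ Yc / (p - 1)

    def inv_sqrt(S):
        # Sigma^{-1/2} from the eigen-decomposition of a symmetric matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)
    # Left/right singular vectors of the whitened cross-covariance are c and d;
    # the singular values are the canonical correlations.
    C, s, Dt = np.linalg.svd(Kx @ Sxy @ Ky)
    a = Kx @ C        # columns are the weight vectors a_k = Sxx^{-1/2} c_k
    b = Ky @ Dt.T     # columns are the weight vectors b_k = Syy^{-1/2} d_k
    return a, b, s

rng = np.random.default_rng(2)
Z = rng.standard_normal((500, 2))                  # shared latent signal
X = np.hstack([Z, rng.standard_normal((500, 2))])
Y = np.hstack([Z @ rng.standard_normal((2, 2)),
               rng.standard_normal((500, 1))])
a, b, rho = cca(X, Y)
print(rho)    # canonical correlations in decreasing order
```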
CCA can be computed using singular value decomposition on a correlation matrix.9 It is available as a function in,10 for example, MATLAB (as canoncorr), R (as the standard function cancor, among several other packages), SAS (as proc cancorr), and Python (in the library scikit-learn, as cross decomposition).
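For instance, a minimal usage sketch with scikit-learn (the synthetic data is invented for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4))
Y = X[:, :2] + 0.1 * rng.standard_normal((100, 2))   # Y correlated with X

cca = CCA(n_components=2)
U, V = cca.fit_transform(X, Y)                       # canonical variates
print(np.corrcoef(U[:, 0], V[:, 0])[0, 1])           # first canonical correlation
```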
CCA computation using singular value decomposition on a correlation matrix is related to the cosine of the angles between flats. The cosine function is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite-precision computer arithmetic. To fix this problem, alternative algorithms12 are available in SciPy (as the linear-algebra function subspace_angles) and MATLAB (as the FileExchange function subspacea).
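A small sketch with SciPy's subspace_angles (the nearly overlapping subspaces are invented for illustration); the routine computes principal angles in a way that remains accurate even when the angles, and hence one minus the cosines, are tiny:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(4)
A = rng.standard_normal((1000, 3))     # columns span the first subspace
# Second subspace sharing one almost identical direction with A:
B = np.hstack([A[:, :1] + 1e-8 * rng.standard_normal((1000, 1)),
               rng.standard_normal((1000, 2))])

theta = subspace_angles(A, B)          # principal angles, largest first
print(np.cos(theta))                   # the last cosine is very close to 1
```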
Each row can be tested for significance with the following method. Since the correlations are sorted in decreasing order, saying that row i {\displaystyle i} is zero implies all further correlations are also zero. Suppose we have p {\displaystyle p} independent observations in a sample and that ρ ^ i {\displaystyle {\widehat {\rho }}_{i}} is the estimated correlation for i = 1 , … , min { m , n } {\displaystyle i=1,\dots ,\min\{m,n\}} . Then the test statistic for the i {\displaystyle i} th row is

{\displaystyle \chi ^{2}=-\left(p-1-{\tfrac {1}{2}}(m+n+1)\right)\ln \prod _{j=i}^{\min\{m,n\}}\left(1-{\widehat {\rho }}_{j}^{\,2}\right),}
which is asymptotically distributed as a chi-squared with ( m − i + 1 ) ( n − i + 1 ) {\displaystyle (m-i+1)(n-i+1)} degrees of freedom for large p {\displaystyle p} .13 Since all the correlations from min { m , n } {\displaystyle \min\{m,n\}} to p {\displaystyle p} are logically zero (and estimated that way also) the product for the terms after this point is irrelevant.
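A minimal sketch of this test (the helper name is invented; SciPy assumed), written with zero-based indexing so that index i corresponds to row i + 1 of the text:

```python
import numpy as np
from scipy.stats import chi2

def cca_significance(rho_hat, p, n, m):
    """Chi-squared statistic and p-value for each row of canonical correlations."""
    rho_hat = np.asarray(rho_hat)
    results = []
    for i in range(len(rho_hat)):               # i = 0 tests all correlations
        stat = -(p - 1 - (m + n + 1) / 2.0) \
               * np.log(np.prod(1 - rho_hat[i:] ** 2))
        df = (m - i) * (n - i)                   # (m - i + 1)(n - i + 1), 1-based
        results.append((stat, chi2.sf(stat, df)))
    return results

# E.g. estimated canonical correlations from p = 200 samples, n = 4, m = 3:
print(cca_significance([0.7, 0.3, 0.05], p=200, n=4, m=3))
```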
Note that in the small-sample-size limit, with p < n + m {\displaystyle p<n+m} , the top m + n − p {\displaystyle m+n-p} correlations are guaranteed to be identically 1, and hence the test is meaningless.14
A typical use for canonical correlation in the experimental context is to take two sets of variables and see what is common between the two sets.15 For example, in psychological testing, one could take two well-established multidimensional personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) and the NEO. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an extraversion or neuroticism dimension accounted for a substantial amount of shared variance between the two tests.
One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and a set of inputs. Constraints can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions. This type of model is known as a maximum correlation model.16
Visualization of the results of canonical correlation is usually through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray-like bars, with each half representing the two sets of variables.17
Let X = x 1 {\displaystyle X=x_{1}} with zero expected value, i.e., E ( X ) = 0 {\displaystyle \operatorname {E} (X)=0} .

If Y = X {\displaystyle Y=X} , i.e., X {\displaystyle X} and Y {\displaystyle Y} are perfectly correlated, then, e.g., a = 1 {\displaystyle a=1} and b = 1 {\displaystyle b=1} , so that the first (and, in this example, only) pair of canonical variables is U = X {\displaystyle U=X} and V = Y = X {\displaystyle V=Y=X} .

If Y = − X {\displaystyle Y=-X} , i.e., X {\displaystyle X} and Y {\displaystyle Y} are perfectly anticorrelated, then, e.g., a = 1 {\displaystyle a=1} and b = − 1 {\displaystyle b=-1} , so that the first (and, in this example, only) pair of canonical variables is U = X {\displaystyle U=X} and V = − Y = X {\displaystyle V=-Y=X} .
We notice that in both cases U = V {\displaystyle U=V} , which illustrates that the canonical-correlation analysis treats correlated and anticorrelated variables similarly.
Assuming that X = ( x 1 , … , x n ) T {\displaystyle X=(x_{1},\dots ,x_{n})^{T}} and Y = ( y 1 , … , y m ) T {\displaystyle Y=(y_{1},\dots ,y_{m})^{T}} have zero expected values, i.e., E ( X ) = E ( Y ) = 0 {\displaystyle \operatorname {E} (X)=\operatorname {E} (Y)=0} , their covariance matrices Σ X X = Cov ( X , X ) = E [ X X T ] {\displaystyle \Sigma _{XX}=\operatorname {Cov} (X,X)=\operatorname {E} [XX^{T}]} and Σ Y Y = Cov ( Y , Y ) = E [ Y Y T ] {\displaystyle \Sigma _{YY}=\operatorname {Cov} (Y,Y)=\operatorname {E} [YY^{T}]} can be viewed as Gram matrices in an inner product for the entries of X {\displaystyle X} and Y {\displaystyle Y} , respectively. In this interpretation, the random variables, entries x i {\displaystyle x_{i}} of X {\displaystyle X} and y j {\displaystyle y_{j}} of Y {\displaystyle Y} , are treated as elements of a vector space with an inner product given by the covariance cov ( x i , y j ) {\displaystyle \operatorname {cov} (x_{i},y_{j})} ; see Covariance#Relationship to inner products.
The definition of the canonical variables U {\displaystyle U} and V {\displaystyle V} is then equivalent to the definition of principal vectors for the pair of subspaces spanned by the entries of X {\displaystyle X} and Y {\displaystyle Y} with respect to this inner product. The canonical correlations corr ( U , V ) {\displaystyle \operatorname {corr} (U,V)} are then equal to the cosines of the principal angles.
CCA can also be viewed as a special whitening transformation where the random vectors X {\displaystyle X} and Y {\displaystyle Y} are simultaneously transformed in such a way that the cross-correlation between the whitened vectors X C C A {\displaystyle X^{CCA}} and Y C C A {\displaystyle Y^{CCA}} is diagonal.18 The canonical correlations are then interpreted as regression coefficients linking X C C A {\displaystyle X^{CCA}} and Y C C A {\displaystyle Y^{CCA}} and may also be negative. The regression view of CCA also provides a way to construct a latent variable probabilistic generative model for CCA, with uncorrelated hidden variables representing shared and non-shared variability.19
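A small numerical check of this whitening view (all names and data invented for illustration): after whitening each block and rotating with the singular vectors of the whitened cross-covariance, the sample cross-covariance of the transformed vectors is diagonal, with the canonical correlations on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 5000
X = rng.standard_normal((p, 3))
Y = X @ rng.standard_normal((3, 3)) + rng.standard_normal((p, 3))
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)

def inv_sqrt(S):
    # Sigma^{-1/2} via eigen-decomposition of a symmetric matrix.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

Kx = inv_sqrt(Xc.T @ Xc / (p - 1))
Ky = inv_sqrt(Yc.T @ Yc / (p - 1))
M = Kx @ (Xc.T @ Yc / (p - 1)) @ Ky     # whitened cross-covariance
U_, s, Vt = np.linalg.svd(M)

Xw = Xc @ Kx @ U_                       # X^CCA: whitened, rotated block
Yw = Yc @ Ky @ Vt.T                     # Y^CCA
C = Xw.T @ Yw / (p - 1)                 # equals diag(s) up to round-off
print(np.round(C, 3))
```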
Härdle, Wolfgang; Simar, Léopold (2007). "Canonical Correlation Analysis". Applied Multivariate Statistical Analysis. pp. 321–330. CiteSeerX 10.1.1.324.403. doi:10.1007/978-3-540-72244-1_14. ISBN 978-3-540-72243-4.
Knapp, T. R. (1978). "Canonical correlation analysis: A general parametric significance-testing system". Psychological Bulletin. 85 (2): 410–416. doi:10.1037/0033-2909.85.2.410.
Hotelling, H. (1936). "Relations Between Two Sets of Variates". Biometrika. 28 (3–4): 321–377. doi:10.1093/biomet/28.3-4.321. JSTOR 2333955.
Jordan, C. (1875). "Essai sur la géométrie à n dimensions". Bull. Soc. Math. France. 3: 103.
Andrew, Galen; Arora, Raman; Bilmes, Jeff; Livescu, Karen (2013-05-26). "Deep Canonical Correlation Analysis". Proceedings of the 30th International Conference on Machine Learning. PMLR: 1247–1255. https://proceedings.mlr.press/v28/andrew13.html
Ju, Ce; Kobler, Reinmar J; Tang, Liyao; Guan, Cuntai; Kawanabe, Motoaki (2024). "Deep Geodesic Canonical Correlation Analysis for Covariance-Based Neuroimaging Data". The Twelfth International Conference on Learning Representations (ICLR 2024, spotlight). https://openreview.net/pdf?id=PnR1MNen7u
"Statistical Learning with Sparsity: the Lasso and Generalizations". hastie.su.domains. Retrieved 2023-09-12. https://hastie.su.domains/StatLearnSparsity/
Gu, Fei; Wu, Hao (2018-04-01). "Simultaneous canonical correlation analysis with invariant canonical loadings". Behaviormetrika. 45 (1): 111–132. doi:10.1007/s41237-017-0042-8. ISSN 1349-6964.
Hsu, D.; Kakade, S. M.; Zhang, T. (2012). "A spectral algorithm for learning Hidden Markov Models" (PDF). Journal of Computer and System Sciences. 78 (5): 1460. arXiv:0811.4413. doi:10.1016/j.jcss.2011.12.025. S2CID 220740158. http://www.cs.mcgill.ca/~colt2009/papers/011.pdf
Huang, S. Y.; Lee, M. H.; Hsiao, C. K. (2009). "Nonlinear measures of association with kernel canonical correlation analysis and applications" (PDF). Journal of Statistical Planning and Inference. 139 (7): 2162. doi:10.1016/j.jspi.2008.10.011. Archived from the original (PDF) on 2017-03-13. Retrieved 2015-09-04. https://web.archive.org/web/20170313203427/http://www.stat.sinica.edu.tw/syhuang/papersdownload/KCCA-080906.pdf
Chapman, James; Wang, Hao-Ting (2021-12-18). "CCA-Zoo: A collection of Regularized, Deep Learning based, Kernel, and Probabilistic CCA methods in a scikit-learn style framework". Journal of Open Source Software. 6 (68): 3823. Bibcode:2021JOSS....6.3823C. doi:10.21105/joss.03823. ISSN 2475-9066.
Knyazev, A.V.; Argentati, M.E. (2002). "Principal Angles between Subspaces in an A-Based Scalar Product: Algorithms and Perturbation Estimates". SIAM Journal on Scientific Computing. 23 (6): 2009–2041. Bibcode:2002SJSC...23.2008K. CiteSeerX 10.1.1.73.2914. doi:10.1137/S1064827500377332.
Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press.
Song, Yang; Schreier, Peter J.; Ramírez, David; Hasija, Tanuj. "Canonical correlation analysis of high-dimensional data with very small sample support". arXiv:1604.02047.
Sieranoja, S.; Sahidullah, Md; Kinnunen, T.; Komulainen, J.; Hadid, A. (July 2018). "Audiovisual Synchrony Detection with Optimized Audio Features" (PDF). 2018 IEEE 3rd International Conference on Signal and Image Processing (ICSIP). pp. 377–381. doi:10.1109/SIPROCESS.2018.8600424. ISBN 978-1-5386-6396-7. S2CID 51682024.
Tofallis, C. (1999). "Model Building with Multiple Dependent Variables and Constraints". Journal of the Royal Statistical Society, Series D. 48 (3): 371–378. arXiv:1109.0725. doi:10.1111/1467-9884.00195. S2CID 8942357.
Degani, A.; Shafto, M.; Olson, L. (2006). "Canonical Correlation Analysis: Use of Composite Heliographs for Representing Multiple Patterns" (PDF). Diagrammatic Representation and Inference. Lecture Notes in Computer Science. Vol. 4045. p. 93. CiteSeerX 10.1.1.538.5217. doi:10.1007/11783183_11. ISBN 978-3-540-35623-3.
Jendoubi, T.; Strimmer, K. (2018). "A whitening approach to probabilistic canonical correlation analysis for omics data integration". BMC Bioinformatics. 20 (1): 15. arXiv:1802.03490. doi:10.1186/s12859-018-2572-9. PMC 6327589. PMID 30626338.
Jendoubi, Takoua; Strimmer, Korbinian (9 January 2019). "A whitening approach to probabilistic canonical correlation analysis for omics data integration". BMC Bioinformatics. 20 (1): 15. doi:10.1186/s12859-018-2572-9. ISSN 1471-2105. PMC 6327589. PMID 30626338.
Haghighat, Mohammad; Abdel-Mottaleb, Mohamed; Alhalabi, Wadee (2016). "Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition". IEEE Transactions on Information Forensics and Security. 11 (9): 1984–1996. doi:10.1109/TIFS.2016.2569061. S2CID 15624506. https://zenodo.org/record/889881