The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as:

$$M[f]_{jk} = \frac{1}{k!}\left[\frac{d^{k}}{dx^{k}}\bigl(f(x)\bigr)^{j}\right]_{x=0},$$

so as to satisfy the (Taylor series) equation:

$$\bigl(f(x)\bigr)^{j} = \sum_{k=0}^{\infty} M[f]_{jk}\, x^{k}.$$
For instance, the computation of $f(x)$ by

$$f(x) = \sum_{k=0}^{\infty} M[f]_{1,k}\, x^{k}$$

simply amounts to the dot product of row 1 of $M[f]$ with the column vector $\left[1, x, x^{2}, x^{3}, \ldots\right]^{\tau}$.
The entries of $M[f]$ in the next row give the 2nd power of $f(x)$:

$$f(x)^{2} = \sum_{k=0}^{\infty} M[f]_{2,k}\, x^{k},$$

and also, in order to have the zeroth power of $f(x)$ in $M[f]$, we adopt the row 0, which contains zeros everywhere except the first position, so that

$$f(x)^{0} = 1 = \sum_{k=0}^{\infty} M[f]_{0,k}\, x^{k} \qquad\text{with } M[f]_{0,k} = \delta_{0,k}.$$
Thus, the dot product of $M[f]$ with the column vector $\left[1, x, x^{2}, \ldots\right]^{T}$ yields the column vector $\left[1, f(x), f(x)^{2}, \ldots\right]^{T}$, i.e.,

$$M[f]\begin{bmatrix}1\\ x\\ x^{2}\\ \vdots\end{bmatrix} = \begin{bmatrix}1\\ f(x)\\ f(x)^{2}\\ \vdots\end{bmatrix}.$$
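To make the mechanics concrete, here is a minimal sketch (not from the source) that builds the upper-left block of a Carleman matrix with sympy under the convention above; the helper name carleman, the example function $\exp(x)-1$, and the truncation size 6 are illustrative assumptions, not anything prescribed by the article.

```python
import sympy as sp

x = sp.symbols('x')

def carleman(f, size):
    """Upper-left size-by-size block of the Carleman matrix of f about x = 0."""
    M = sp.zeros(size, size)
    for j in range(size):
        # Row j holds the Taylor coefficients of f(x)**j about 0.
        ser = sp.expand(sp.series(f**j, x, 0, size).removeO())
        for k in range(size):
            M[j, k] = ser.coeff(x, k)
    return M

f = sp.exp(x) - 1                              # illustrative choice of f
M = carleman(f, 6)

powers = sp.Matrix([x**k for k in range(6)])   # the column vector [1, x, x^2, ...]^T
print(sp.expand((M * powers)[1]))              # row 1: truncated Taylor series of f(x)
print(sp.expand((M * powers)[2]))              # row 2: truncated Taylor series of f(x)**2
```

Multiplying the truncated matrix by the vector of powers of $x$ reproduces, row by row, the truncated Taylor series of $1, f(x), f(x)^{2}, \ldots$, exactly as described above.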
A generalization of the Carleman matrix of a function can be defined around any point $x_{0}$, as $M[f]_{x_{0}} = M[g]$ where $g(x) = f(x + x_{0}) - x_{0}$. Since $g$ is just $f$ conjugated by the shift $x \mapsto x + x_{0}$, this allows the matrix power to be related to iteration around $x_{0}$: by the composition rule below, $\left(M[f]_{x_{0}}\right)^{n} = M[g]^{n} = M\left[g^{\circ n}\right]$, where $g^{\circ n}(x) = f^{\circ n}(x + x_{0}) - x_{0}$ is built from the $n$-th iterate of $f$.
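As a small worked illustration (an assumption of this sketch, not taken from the source): $f(x) = x^{2}$ fixes $x_{0} = 1$, the shifted function is $g(x) = f(x+1) - 1 = x^{2} + 2x$ with $g(0) = 0$, and $M[f]_{1} = M[g]$ can be built with the same helper as in the first sketch.

```python
import sympy as sp

x = sp.symbols('x')

# Same truncated-Carleman helper as in the first sketch: entry (j, k) = [x^k] f(x)^j.
def carleman(f, size):
    return sp.Matrix(size, size, lambda j, k:
                     sp.expand(sp.series(f**j, x, 0, size).removeO()).coeff(x, k))

f = x**2                                   # f fixes the point x0 = 1
x0 = 1
g = sp.expand(f.subs(x, x + x0) - x0)      # g(x) = f(x + x0) - x0 = x**2 + 2*x, g(0) = 0
print(carleman(g, 5))                      # truncated M[f]_{x0} = M[g]
```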
More generally, suppose $h(x) = \sum_{n} c_{n}(h)\,\psi_{n}(x)$ expands functions over a family of basis functions $\psi_{n}$ with coefficient functionals $c_{n}$, and let $G[f]_{mn} = c_{n}(\psi_{m}\circ f)$ denote the corresponding generalized Carleman matrix. If we set $\psi_{n}(x) = x^{n}$, we recover the Carleman matrix. Because $h(x) = \sum_{n} c_{n}(h)\cdot\psi_{n}(x) = \sum_{n} c_{n}(h)\cdot x^{n}$, the coefficient $c_{n}(h)$ must be the $n$-th coefficient of the Taylor series of $h$, that is, $c_{n}(h) = \frac{1}{n!}\, h^{(n)}(0)$. Therefore

$$G[f]_{mn} = c_{n}(\psi_{m}\circ f) = c_{n}\bigl(f(x)^{m}\bigr) = \frac{1}{n!}\left[\frac{d^{n}}{dx^{n}}\bigl(f(x)\bigr)^{m}\right]_{x=0},$$

which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.)
If $\{e_{n}(x)\}_{n}$ is an orthonormal basis for a Hilbert space with a defined inner product $\langle f, g\rangle$, we can set $\psi_{n} = e_{n}$, and then $c_{n}(h) = \langle h, e_{n}\rangle$. Then $G[f]_{mn} = c_{n}(e_{m}\circ f) = \langle e_{m}\circ f, e_{n}\rangle$.
If $e_{n}(x) = e^{inx}$, we obtain the analogous construction for Fourier series. Let $\hat{c}_{n}$ and $\hat{G}$ denote the Carleman coefficients and matrix in the Fourier basis. Because the basis is orthonormal with respect to the inner product $\langle u, v\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} u(x)\,\overline{v(x)}\,dx$, we have

$$\hat{c}_{n}(h) = \langle h, e_{n}\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} h(x)\, e^{-inx}\, dx.$$
Therefore,

$$\hat{G}[f]_{mn} = \hat{c}_{n}(e_{m}\circ f) = \langle e_{m}\circ f, e_{n}\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{imf(x)}\, e^{-inx}\, dx.$$
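As an illustration, the following hedged sketch approximates $\hat{G}[f]_{mn} = \langle e_{m}\circ f, e_{n}\rangle$ by numerical quadrature, assuming the normalized inner product written above; the choice $f(x) = 2x$ (so that $e_{m}\circ f$ stays $2\pi$-periodic), the mode range, and the sample count are all assumptions of the sketch.

```python
import numpy as np

def fourier_carleman(f, modes, samples=4096):
    """Approximate G^[f]_{mn} = <e_m o f, e_n> for m, n in -modes..modes."""
    x = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    idx = np.arange(-modes, modes + 1)
    G = np.empty((len(idx), len(idx)), dtype=complex)
    for a, m in enumerate(idx):
        em_of_f = np.exp(1j * m * f(x))                    # (e_m o f)(x)
        for b, n in enumerate(idx):
            # <u, v> = (1/(2*pi)) * integral of u * conj(v); the mean over a uniform
            # grid on [-pi, pi) approximates that normalized integral.
            G[a, b] = np.mean(em_of_f * np.exp(-1j * n * x))
    return idx, G

# For f(x) = 2x, each row has a single 1, in the column with n = 2m,
# whenever that mode lies inside the truncated range (otherwise the row is zero).
idx, G = fourier_carleman(lambda t: 2 * t, modes=2)
print(np.round(G.real, 3))
```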
Carleman matrices satisfy the fundamental relationship

$$M[f\circ g] = M[f]\, M[g],$$

which makes the Carleman matrix $M$ a (direct) representation of $f(x)$. Here the term $f\circ g$ denotes the composition of functions, $f(g(x))$.
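A quick numerical check of this relationship on finite truncations, reusing the carleman helper from the first sketch; the example functions $x/(1-x)$ and $\exp(x)-1$ are assumptions of the sketch, chosen to fix 0 so that the truncated identity holds exactly rather than only approximately.

```python
import sympy as sp

x = sp.symbols('x')

# Same truncated-Carleman helper as in the first sketch: entry (j, k) = [x^k] f(x)^j.
def carleman(f, size):
    return sp.Matrix(size, size, lambda j, k:
                     sp.expand(sp.series(f**j, x, 0, size).removeO()).coeff(x, k))

f = x / (1 - x)        # f(0) = 0
g = sp.exp(x) - 1      # g(0) = 0
N = 6

lhs = carleman(f.subs(x, g), N)            # M[f o g]
rhs = carleman(f, N) * carleman(g, N)      # M[f] * M[g]
print((lhs - rhs).applyfunc(sp.simplify))  # zero matrix on the truncation
```

Because $g(0) = 0$, row $m$ of $M[g]$ has no entries in columns $k < m$, so only finitely many terms contribute to each entry of the product and the $N\times N$ truncation already reproduces the truncation of $M[f\circ g]$ exactly.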
Other properties include:

$M[f^{\circ n}] = M[f]^{\,n}$, relating the Carleman matrix of the $n$-th iterate of $f$ to the $n$-th matrix power, and $M[f^{-1}] = M[f]^{-1}$, relating the Carleman matrix of the compositional inverse of $f$ to the matrix inverse (when these exist).
The Carleman matrix of a constant is:

$$M[a]_{jk} = \begin{cases} a^{j} & k = 0,\\ 0 & k > 0.\end{cases}$$
The Carleman matrix of the identity function is:

$$M[x]_{jk} = \delta_{jk}.$$
The Carleman matrix of a constant addition is:

$$M[a + x]_{jk} = \binom{j}{k}\, a^{\,j-k}.$$
The Carleman matrix of the successor function is equivalent to the binomial coefficient:

$$M[1 + x]_{jk} = \binom{j}{k}$$

(this entry and the $\exp(x)-1$ entry below are checked numerically in the sketch after this list).
The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials:

$$M[\log(1 + x)]_{jk} = s(k, j)\,\frac{j!}{k!}.$$
The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind scaled by factorials:

$$M[-\log(1 - x)]_{jk} = \bigl|s(k, j)\bigr|\,\frac{j!}{k!}.$$
The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials:

$$M[\exp(x) - 1]_{jk} = S(k, j)\,\frac{j!}{k!}.$$
The Carleman matrix of exponential functions is:

$$M[\exp(\lambda x)]_{jk} = \frac{(j\lambda)^{k}}{k!}.$$
The Carleman matrix of a constant multiple is:

$$M[cx]_{jk} = c^{j}\,\delta_{jk}.$$
The Carleman matrix of a linear function is:

$$M[a + bx]_{jk} = \binom{j}{k}\, a^{\,j-k}\, b^{k}.$$
The Carleman matrix of a function $f(x) = \sum_{k=1}^{\infty} f_{k}\, x^{k}$ is:

$$M[f]_{jk} = \sum_{\substack{\nu_{1}+\cdots+\nu_{j}=k\\ \nu_{1},\ldots,\nu_{j}\geq 1}} f_{\nu_{1}}\cdots f_{\nu_{j}}.$$
The Carleman matrix of a function $f(x) = \sum_{k=0}^{\infty} f_{k}\, x^{k}$ is:

$$M[f]_{jk} = \sum_{\substack{\nu_{1}+\cdots+\nu_{j}=k\\ \nu_{1},\ldots,\nu_{j}\geq 0}} f_{\nu_{1}}\cdots f_{\nu_{j}}.$$
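The successor-function and $\exp(x)-1$ entries above can be checked symbolically; the sketch below does so on a $6\times 6$ truncation, again reusing the carleman helper from the first sketch together with sympy's stirling numbers (second kind by default). The restriction to $k\geq j$ in the second check simply skips the entries that vanish because $\exp(x)-1$ fixes 0; the truncation size and function choices are illustrative assumptions.

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')

# Same truncated-Carleman helper as in the first sketch: entry (j, k) = [x^k] f(x)^j.
def carleman(f, size):
    return sp.Matrix(size, size, lambda j, k:
                     sp.expand(sp.series(f**j, x, 0, size).removeO()).coeff(x, k))

N = 6

# Successor function: M[1 + x]_{jk} should equal the binomial coefficient C(j, k).
succ = carleman(1 + x, N)
assert all(succ[j, k] == sp.binomial(j, k) for j in range(N) for k in range(N))

# exp(x) - 1: for k >= j the entry should equal S(k, j) * j!/k!, with S the Stirling
# numbers of the second kind.
expm1 = carleman(sp.exp(x) - 1, N)
assert all(expm1[j, k] == stirling(k, j) * sp.factorial(j) / sp.factorial(k)
           for k in range(N) for j in range(k + 1))
print("both checks passed")
```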
The Bell matrix or the Jabotinsky matrix of a function $f(x)$ is defined as[1][2][3]

$$B[f]_{jk} = \frac{1}{j!}\left[\frac{d^{j}}{dx^{j}}\bigl(f(x)\bigr)^{k}\right]_{x=0},$$

so as to satisfy the equation

$$\bigl(f(x)\bigr)^{k} = \sum_{j=0}^{\infty} B[f]_{jk}\, x^{j}.$$
These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] The Bell matrix is the transpose of the Carleman matrix and satisfies
$$B[f\circ g] = B[g]\, B[f],$$

which makes the Bell matrix $B$ an anti-representation of $f(x)$.
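A short sketch (same truncation caveats and example functions as the earlier composition check, all assumptions of the sketch) illustrating that transposing the Carleman matrix reverses the order of the product:

```python
import sympy as sp

x = sp.symbols('x')

# Same truncated-Carleman helper as in the first sketch: entry (j, k) = [x^k] f(x)^j.
def carleman(f, size):
    return sp.Matrix(size, size, lambda j, k:
                     sp.expand(sp.series(f**j, x, 0, size).removeO()).coeff(x, k))

def bell(f, size):
    return carleman(f, size).T             # B[f] = M[f]^T

f, g, N = x / (1 - x), sp.exp(x) - 1, 6
lhs = bell(f.subs(x, g), N)                # B[f o g]
rhs = bell(g, N) * bell(f, N)              # B[g] * B[f]: note the reversed order
print((lhs - rhs).applyfunc(sp.simplify))  # zero matrix on the truncation
```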
[1] Knuth, D. (1992). "Convolution Polynomials". The Mathematica Journal. 2 (4): 67–78. arXiv:math/9207221. Bibcode:1992math......7221K.
[2] Jabotinsky, Eri (1953). "Representation of functions by matrices. Application to Faber polynomials". Proceedings of the American Mathematical Society. 4 (4): 546–553. doi:10.1090/S0002-9939-1953-0059359-0. ISSN 0002-9939.
[3] Lang, W. (2000). "On generalizations of the Stirling number triangles". Journal of Integer Sequences. 3 (2.4): 1–19. Bibcode:2000JIntS...3...24L.
[4] Jabotinsky, Eri (1947). "Sur la représentation de la composition de fonctions par un produit de matrices. Application à l'itération de e^x et de e^x−1" [On representing the composition of functions by a product of matrices. Application to the iteration of e^x and e^x−1]. Comptes rendus de l'Académie des Sciences. 224: 323–324.