The zeros of the eta function include all the zeros of the zeta function: the negative even integers (real, equidistant, simple zeros); the zeros along the critical line, none of which are known to be multiple and over 40% of which have been proven to be simple; and the hypothetical zeros in the critical strip but not on the critical line, which, if they exist, must occur at the vertices of rectangles symmetric about the real axis and the critical line, and whose multiplicity is unknown. In addition, the factor $1-2^{1-s}$ adds an infinite number of complex simple zeros, located at equidistant points on the line $\Re(s)=1$, at $s_n = 1 + 2n\pi i/\ln(2)$ where $n$ is any nonzero integer.
The zeros of the eta function are located symmetrically with respect to the real axis and, under the Riemann hypothesis, would be on two parallel lines $\Re(s)=1/2$, $\Re(s)=1$, and on the perpendicular half-line formed by the negative real axis.
In the equation $\eta(s) = (1-2^{1-s})\,\zeta(s)$, "the pole of ζ(s) at s = 1 is cancelled by the zero of the other factor" (Titchmarsh, 1986, p. 17), and as a result η(1) is neither infinite nor zero (see § Particular values). However, in the equation
$$\zeta(s) = \frac{\eta(s)}{1-2^{1-s}},$$
η must be zero at all the points $s_n = 1 + n\,\frac{2\pi}{\ln 2}\,i$, $n\neq 0$, $n\in\mathbb{Z}$, where the denominator is zero, if the Riemann zeta function is analytic and finite there. The problem of proving this without defining the zeta function first was signaled and left open by E. Landau in his 1909 treatise on number theory: "Whether the eta series is different from zero or not at the points $s_n\neq 1$, i.e., whether these are poles of zeta or not, is not readily apparent here."
A first solution of Landau's problem was published almost 40 years later by D. V. Widder in his book The Laplace Transform. It uses the next prime, 3, instead of 2 to define a Dirichlet series similar to the eta function, which we will call the $\lambda$ function. It is defined for $\Re(s)>0$ and, like eta, has some zeros on $\Re(s)=1$, but these are not equal to the zeros of eta.
$$\lambda(s) = \left(1-\frac{3}{3^{s}}\right)\zeta(s) = \left(1+\frac{1}{2^{s}}\right)-\frac{2}{3^{s}}+\left(\frac{1}{4^{s}}+\frac{1}{5^{s}}\right)-\frac{2}{6^{s}}+\cdots$$
If $s$ is real and strictly positive, the series converges, since the regrouped terms alternate in sign and decrease in absolute value to zero. According to a theorem on uniform convergence of Dirichlet series first proven by Cahen in 1894, the $\lambda(s)$ function is then analytic for $\Re(s)>0$, a region which includes the line $\Re(s)=1$. Now we can correctly define $\zeta(s)$, wherever the denominators are not zero, by
$$\zeta(s) = \frac{\eta(s)}{1-\frac{2}{2^{s}}} \qquad\text{or}\qquad \zeta(s) = \frac{\lambda(s)}{1-\frac{3}{3^{s}}}.$$
Since $\frac{\log 3}{\log 2}$ is irrational, the denominators in the two definitions are not zero at the same time except for $s=1$, and the $\zeta(s)$ function is thus well defined and analytic for $\Re(s)>0$ except at $s=1$. We finally get indirectly that $\eta(s_n)=0$ when $s_n\neq 1$:
$$\eta(s_n) = \left(1-\frac{2}{2^{s_n}}\right)\zeta(s_n) = \frac{1-\frac{2}{2^{s_n}}}{1-\frac{3}{3^{s_n}}}\,\lambda(s_n) = 0.$$
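These relations can be checked numerically. The following minimal sketch (assuming the third-party mpmath library, whose `altzeta` routine evaluates the Dirichlet eta function; the working precision and the choice of the first point $s_1 = 1 + 2\pi i/\ln 2$ are arbitrary) verifies that the eta-based denominator vanishes there, the lambda-based denominator does not, and eta itself vanishes while zeta stays finite:

```python
from mpmath import mp

mp.dps = 30                          # work with 30 significant digits

s1 = 1 + 2j * mp.pi / mp.log(2)      # first zero of 1 - 2^(1-s) away from s = 1

print(abs(1 - mp.power(2, 1 - s1)))  # ~1e-30: the eta-based denominator vanishes
print(abs(1 - mp.power(3, 1 - s1)))  # clearly nonzero: the lambda-based denominator does not
print(abs(mp.altzeta(s1)))           # ~1e-30: eta(s1) = 0, as claimed
print(mp.zeta(s1))                   # a finite value: no pole of zeta at s1
```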
An elementary, direct and $\zeta$-independent proof of the vanishing of the eta function at $s_n\neq 1$ was published by J. Sondow in 2003. It expresses the value of the eta function as the limit of special Riemann sums associated with an integral known to be zero, using a relation between the partial sums of the Dirichlet series defining the eta and zeta functions for $\Re(s)>1$.
With some simple algebra performed on finite sums, we can write for any complex $s$
$$\begin{aligned}
\eta_{2n}(s) &= \sum_{k=1}^{2n}\frac{(-1)^{k-1}}{k^{s}}
= 1-\frac{1}{2^{s}}+\frac{1}{3^{s}}-\frac{1}{4^{s}}+\dots+\frac{(-1)^{2n-1}}{(2n)^{s}} \\
&= 1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\frac{1}{4^{s}}+\dots+\frac{1}{(2n)^{s}}
-2\left(\frac{1}{2^{s}}+\frac{1}{4^{s}}+\dots+\frac{1}{(2n)^{s}}\right) \\
&= \left(1-\frac{2}{2^{s}}\right)\zeta_{2n}(s)+\frac{2}{2^{s}}\left(\frac{1}{(n+1)^{s}}+\dots+\frac{1}{(2n)^{s}}\right) \\
&= \left(1-\frac{2}{2^{s}}\right)\zeta_{2n}(s)+\frac{2n}{(2n)^{s}}\,\frac{1}{n}\left(\frac{1}{(1+1/n)^{s}}+\dots+\frac{1}{(1+n/n)^{s}}\right).
\end{aligned}$$
Now if $s=1+it$ and $2^{s}=2$, the factor multiplying $\zeta_{2n}(s)$ is zero, and
$$\eta_{2n}(s) = \frac{1}{n^{it}}\,R_n\!\left(\frac{1}{(1+x)^{s}},0,1\right),$$
where $R_n(f(x),a,b)$ denotes a special Riemann sum approximating the integral of $f(x)$ over $[a,b]$. For $t=0$, i.e. $s=1$, we get
$$\eta(1) = \lim_{n\to\infty}\eta_{2n}(1) = \lim_{n\to\infty}R_n\!\left(\frac{1}{1+x},0,1\right) = \int_0^1\frac{dx}{1+x} = \log 2 \neq 0.$$
Otherwise, if $t\neq 0$, then $|n^{1-s}| = |n^{-it}| = 1$, which yields
$$|\eta(s)| = \lim_{n\to\infty}|\eta_{2n}(s)| = \lim_{n\to\infty}\left|R_n\!\left(\frac{1}{(1+x)^{s}},0,1\right)\right| = \left|\int_0^1\frac{dx}{(1+x)^{s}}\right| = \left|\frac{2^{1-s}-1}{1-s}\right| = \left|\frac{1-1}{-it}\right| = 0.$$
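The argument can be illustrated with the grouped partial sums $\eta_{2n}$ themselves; a minimal sketch in plain Python (the choice of $s_1$ and of the sample values of $n$ is arbitrary; the magnitudes shrink toward zero roughly like $1/n$, reflecting the convergence of the Riemann sums to the vanishing integral):

```python
import math

s1 = 1 + 2j * math.pi / math.log(2)   # a zero of 1 - 2^(1-s), so eta should vanish here

def eta_partial(s, terms):
    """Partial sum of the alternating series: sum_{k=1}^{terms} (-1)^(k-1) / k^s."""
    return sum((-1) ** (k - 1) / k ** s for k in range(1, terms + 1))

for n in (10, 100, 1000, 10000):
    # use 2n terms so the pairing matches eta_{2n}(s) in the derivation above
    print(n, abs(eta_partial(s1, 2 * n)))
```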
Assuming $\eta(s_n)=0$, for each point $s_n\neq 1$ where $2^{s_n}=2$, we can now define $\zeta(s_n)$ by continuity as follows,
$$\zeta(s_n) = \lim_{s\to s_n}\frac{\eta(s)}{1-\frac{2}{2^{s}}} = \lim_{s\to s_n}\frac{\eta(s)-\eta(s_n)}{\frac{2}{2^{s_n}}-\frac{2}{2^{s}}} = \lim_{s\to s_n}\frac{\eta(s)-\eta(s_n)}{s-s_n}\,\frac{s-s_n}{\frac{2}{2^{s_n}}-\frac{2}{2^{s}}} = \frac{\eta'(s_n)}{\log(2)}.$$
The apparent singularity of zeta at $s_n\neq 1$ is now removed, and the zeta function is proven to be analytic everywhere in $\Re(s)>0$, except at $s=1$, where
$$\lim_{s\to 1}(s-1)\zeta(s) = \lim_{s\to 1}\frac{\eta(s)}{\frac{1-2^{1-s}}{s-1}} = \frac{\eta(1)}{\log 2} = 1.$$
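Both limits lend themselves to a quick numerical confirmation; the sketch below (again assuming mpmath, using its numerical differentiation `diff`; precision and the test point are arbitrary choices) compares $\zeta(s_1)$ evaluated directly against $\eta'(s_1)/\log 2$, and checks the residue at $s=1$:

```python
from mpmath import mp

mp.dps = 30
s1 = 1 + 2j * mp.pi / mp.log(2)                # a removable singularity of eta(s)/(1 - 2^(1-s))

direct = mp.zeta(s1)                           # zeta evaluated directly at s1
via_eta = mp.diff(mp.altzeta, s1) / mp.log(2)  # eta'(s1) / log 2, from the limit above
print(abs(direct - via_eta))                   # essentially zero: the two values agree

print(mp.altzeta(1) / mp.log(2))               # residue of zeta at s = 1: eta(1)/log 2 = 1
```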
A number of integral formulas involving the eta function can be listed. The first one follows from a change of variable in the integral representation of the Gamma function (Abel, 1823), giving a Mellin transform which can be expressed in different ways as a double integral (Sondow, 2005). This is valid for $\Re(s)>0$:
$$\begin{aligned}
\Gamma(s)\eta(s) &= \int_0^\infty\frac{x^{s-1}}{e^{x}+1}\,dx
= \int_0^\infty\!\!\int_0^x\frac{x^{s-2}}{e^{x}+1}\,dy\,dx \\
&= \int_0^\infty\!\!\int_0^\infty\frac{(t+r)^{s-2}}{e^{t+r}+1}\,dr\,dt
= \int_0^1\!\!\int_0^1\frac{\left(-\log(xy)\right)^{s-2}}{1+xy}\,dx\,dy.
\end{aligned}$$
The Cauchy–Schlömilch transformation (Amdeberhan, Moll et al., 2010) can be used to prove this other representation, valid for $\Re(s)>-1$. Integration by parts of the first integral above in this section yields another derivation.
$$2^{1-s}\,\Gamma(s+1)\,\eta(s) = 2\int_0^\infty\frac{x^{2s+1}}{\cosh^{2}(x^{2})}\,dx = \int_0^\infty\frac{t^{s}}{\cosh^{2}(t)}\,dt.$$
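Both representations are easy to spot-check by numerical quadrature; a minimal sketch assuming mpmath (`quad`, `gamma`, `altzeta`), evaluated at the sample point $s=2$, which is an arbitrary choice:

```python
from mpmath import mp

mp.dps = 25
s = mp.mpf(2)   # any s with Re(s) > 0 (resp. Re(s) > -1) would do

# Gamma(s) * eta(s)  =  integral_0^oo  x^(s-1) / (e^x + 1) dx
mellin = mp.quad(lambda x: x**(s - 1) / (mp.exp(x) + 1), [0, mp.inf])
print(mellin, mp.gamma(s) * mp.altzeta(s))

# 2^(1-s) * Gamma(s+1) * eta(s)  =  integral_0^oo  t^s / cosh(t)^2 dt
sech_form = mp.quad(lambda t: t**s / mp.cosh(t)**2, [0, mp.inf])
print(sech_form, mp.power(2, 1 - s) * mp.gamma(s + 1) * mp.altzeta(s))
```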
The next formula, due to Lindelöf (1905), is valid over the whole complex plane, when the principal value is taken for the logarithm implicit in the exponential:
$$\eta(s) = \int_{-\infty}^{\infty}\frac{(1/2+it)^{-s}}{e^{\pi t}+e^{-\pi t}}\,dt.$$
This corresponds to a Jensen (1895) formula for the entire function $(s-1)\,\zeta(s)$, valid over the whole complex plane and also proven by Lindelöf:
$$(s-1)\zeta(s) = 2\pi\int_{-\infty}^{\infty}\frac{(1/2+it)^{1-s}}{(e^{\pi t}+e^{-\pi t})^{2}}\,dt.$$
"This formula, remarkable for its simplicity, can be proven easily with the help of Cauchy's theorem, so important for the summation of series," wrote Jensen (1895). Similarly, by converting the integration paths to contour integrals one can obtain other formulas for the eta function, such as this generalisation (Milgram, 2013), valid for $0<c<1$ and all $s$:
$$\eta(s) = \frac{1}{2}\int_{-\infty}^{\infty}\frac{(c+it)^{-s}}{\sin\left(\pi(c+it)\right)}\,dt.$$
The zeros on the negative real axis are factored out cleanly by making $c\to 0^{+}$ (Milgram, 2013) to obtain a formula valid for $\Re(s)<0$:
$$\eta(s) = -\sin\left(\frac{s\pi}{2}\right)\int_0^\infty\frac{t^{-s}}{\sinh(\pi t)}\,dt.$$
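These contour-derived formulas can likewise be checked by quadrature; a minimal sketch assuming mpmath, evaluating Lindelöf's integral at $s=2$ and at the trivial zero $s=-2$, and Milgram's $c\to 0^{+}$ formula at $s=-1$, where $\eta(-1)=\tfrac14$ (all test points are arbitrary choices):

```python
from mpmath import mp

mp.dps = 25

def eta_lindelof(s):
    """Lindelof's integral for eta(s), valid over the whole complex plane."""
    f = lambda t: (mp.mpf(1) / 2 + 1j * t) ** (-s) / (mp.exp(mp.pi * t) + mp.exp(-mp.pi * t))
    return mp.quad(f, [-mp.inf, mp.inf])

print(eta_lindelof(2), mp.altzeta(2))   # both equal pi^2/12
print(eta_lindelof(-2))                 # essentially zero: a trivial zero of eta

# Milgram's c -> 0+ form, valid for Re(s) < 0
s = -1
milgram = -mp.sin(s * mp.pi / 2) * mp.quad(lambda t: t**(-s) / mp.sinh(mp.pi * t), [0, mp.inf])
print(milgram, mp.altzeta(-1))          # both equal 1/4
```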
Most of the series acceleration techniques developed for alternating series can be profitably applied to the evaluation of the eta function. One particularly simple, yet reasonable, method is to apply Euler's transformation of alternating series, to obtain
$$\eta(s) = \sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\frac{1}{(k+1)^{s}}.$$
Note that the second, inner summation is a forward difference.
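As a minimal sketch in plain Python (double precision; the truncation point $n=60$ is an arbitrary choice), the doubly indexed series can be summed directly and already reproduces, e.g., $\eta(1)=\ln 2$ and $\eta(2)=\pi^{2}/12$ to near machine precision:

```python
from math import comb, log, pi

def eta_euler(s, n_max=60):
    """Euler-transformed series for eta(s): each outer term carries weight 2^-(n+1);
    the inner sum is the n-th forward difference of 1/(k+1)^s at k = 0."""
    total = 0j
    for n in range(n_max + 1):
        inner = sum((-1) ** k * comb(n, k) / (k + 1) ** s for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total

print(eta_euler(1), log(2))        # eta(1) = ln 2
print(eta_euler(2), pi ** 2 / 12)  # eta(2) = pi^2 / 12
```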
Peter Borwein used approximations involving Chebyshev polynomials to produce a method for efficient evaluation of the eta function.[2] If
$$d_k = n\sum_{\ell=0}^{k}\frac{(n+\ell-1)!\,4^{\ell}}{(n-\ell)!\,(2\ell)!},$$
then
$$\eta(s) = -\frac{1}{d_n}\sum_{k=0}^{n-1}\frac{(-1)^{k}(d_k-d_n)}{(k+1)^{s}}+\gamma_n(s),$$
where for $\Re(s)\geq\frac{1}{2}$ the error term $\gamma_n$ is bounded by
$$|\gamma_n(s)| \leq \frac{3}{(3+\sqrt{8})^{n}}\,\left(1+2|\Im(s)|\right)\exp\!\left(\frac{\pi}{2}|\Im(s)|\right).$$
The factor of $3+\sqrt{8}\approx 5.8$ in the error bound indicates that the Borwein series converges quite rapidly as $n$ increases.
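A minimal sketch of the method in plain Python follows; the use of exact rationals for the $d_k$, the default $n=50$, and the test points are implementation choices, not part of Borwein's paper:

```python
from fractions import Fraction
from math import factorial, pi

def borwein_eta(s, n=50):
    """Borwein's approximation to eta(s); the error bound quoted above applies for Re(s) >= 1/2."""
    # accumulate d_0, ..., d_n exactly, avoiding rounding in the large factorials
    partial, d = Fraction(0), []
    for l in range(n + 1):
        partial += Fraction(factorial(n + l - 1) * 4 ** l, factorial(n - l) * factorial(2 * l))
        d.append(n * partial)
    total = sum((-1) ** k * float(d[k] - d[n]) / (k + 1) ** s for k in range(n))
    return -total / float(d[n])

print(borwein_eta(2), pi ** 2 / 12)        # eta(2) = pi^2/12
print(abs(borwein_eta(0.5 + 14.134725j)))  # small: eta inherits zeta's zeros on the critical line
```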
Further information: Zeta constant
$$\eta(1-k) = \frac{2^{k}-1}{k}\,B_{k}^{+},$$
where $B_{k}^{+}$ is the $k$-th Bernoulli number (with the convention $B_{1}^{+}=+\tfrac{1}{2}$).
Also:
$$\eta(0) = \tfrac{1}{2},\qquad \eta(-1) = \tfrac{1}{4},\qquad \eta(1) = \ln 2,\qquad \eta(2) = \frac{\pi^{2}}{12},\qquad \eta(3) = \frac{3\zeta(3)}{4},\qquad \eta(4) = \frac{7\pi^{4}}{720}.$$
The general form for even positive integers is:
$$\eta(2n) = (-1)^{n+1}\,\frac{B_{2n}\,\pi^{2n}\left(2^{2n-1}-1\right)}{(2n)!}.$$
Taking the limit $n\to\infty$, one obtains $\eta(\infty)=1$.
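A short numerical check of the two closed forms above, with the first few Bernoulli numbers hard-coded (using the $B_{1}^{+}=+\tfrac12$ convention); a minimal sketch in plain Python:

```python
from math import factorial, pi

B = {1: 0.5, 2: 1 / 6, 3: 0.0, 4: -1 / 30}   # Bernoulli numbers B_k^+ for k = 1..4

# eta(1 - k) = (2^k - 1) B_k / k  ->  eta(0) = 1/2, eta(-1) = 1/4, eta(-2) = 0, eta(-3) = -1/8
for k in (1, 2, 3, 4):
    print("eta(%d) = %g" % (1 - k, (2 ** k - 1) * B[k] / k))

# eta(2n) = (-1)^(n+1) B_{2n} pi^(2n) (2^(2n-1) - 1) / (2n)!  ->  pi^2/12 and 7 pi^4/720
for n in (1, 2):
    value = (-1) ** (n + 1) * B[2 * n] * pi ** (2 * n) * (2 ** (2 * n - 1) - 1) / factorial(2 * n)
    print("eta(%d) = %.12f" % (2 * n, value))
```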
The derivative with respect to the parameter $s$ is, for $s\neq 1$,
$$\eta'(s) = \sum_{n=1}^{\infty}\frac{(-1)^{n}\ln n}{n^{s}} = 2^{1-s}\ln(2)\,\zeta(s)+\left(1-2^{1-s}\right)\zeta'(s).$$
At $s=1$,
$$\eta'(1) = \gamma\,\ln(2)-\frac{\ln^{2}(2)}{2}.$$
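Both expressions are straightforward to confirm numerically; a minimal sketch assuming mpmath (its `diff` provides numerical differentiation and `mp.euler` is the Euler–Mascheroni constant $\gamma$; the test point $s=3$ is arbitrary):

```python
from mpmath import mp

mp.dps = 30

# eta'(1) = gamma * ln 2 - (ln 2)^2 / 2
print(mp.diff(mp.altzeta, 1))
print(mp.euler * mp.log(2) - mp.log(2) ** 2 / 2)

# for s != 1: eta'(s) = 2^(1-s) ln(2) zeta(s) + (1 - 2^(1-s)) zeta'(s), checked at s = 3
s = mp.mpf(3)
print(mp.diff(mp.altzeta, s))
print(mp.power(2, 1 - s) * mp.log(2) * mp.zeta(s) + (1 - mp.power(2, 1 - s)) * mp.diff(mp.zeta, s))
```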
Hardy, G. H. (1922). "A new proof of the functional equation for the Zeta-function". Matematisk Tidsskrift B, 71–73. http://www.jstor.org/stable/24529536
Borwein, Peter (2000). "An efficient algorithm for the Riemann zeta function" (PDF). In Théra, Michel A. (ed.), Constructive, Experimental, and Nonlinear Analysis. Conference Proceedings, Canadian Mathematical Society, Vol. 27. Providence, RI: American Mathematical Society, on behalf of the Canadian Mathematical Society. pp. 29–34. ISBN 978-0-8218-2167-1. Archived from the original (PDF) on 2011-07-26. Retrieved 2008-09-20.