Name | Formula | Assumptions or notes
---|---|---
One-sample z-test | $z=\dfrac{\overline{x}-\mu_0}{\sigma/\sqrt{n}}$ | (Normal population or n large) and σ known. (z is the distance from the mean in relation to the standard deviation of the mean.) For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within k standard deviations for any k (see Chebyshev's inequality). |
Two-sample z-test | $z=\dfrac{(\overline{x}_1-\overline{x}_2)-d_0}{\sqrt{\dfrac{\sigma_1^2}{n_1}+\dfrac{\sigma_2^2}{n_2}}}$ | Normal populations, independent observations, and σ₁ and σ₂ known, where $d_0$ is the value of $\mu_1-\mu_2$ under the null hypothesis. |
One-sample t-test | $t=\dfrac{\overline{x}-\mu_0}{s/\sqrt{n}}$, $df=n-1$ | (Normal population or n large) and σ unknown. |
Paired t-test | $t=\dfrac{\overline{d}-d_0}{s_d/\sqrt{n}}$, $df=n-1$ | (Normal population of differences or n large) and σ unknown. |
Two-sample pooled t-test, equal variances | $t=\dfrac{(\overline{x}_1-\overline{x}_2)-d_0}{s_p\sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}}$, $s_p^2=\dfrac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}$, $df=n_1+n_2-2$ | (Normal populations or n₁ + n₂ > 40), independent observations, and σ₁ = σ₂ unknown. |
Two-sample unpooled t-test, unequal variances (Welch's t-test) | $t=\dfrac{(\overline{x}_1-\overline{x}_2)-d_0}{\sqrt{\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}}}$, $df=\dfrac{\left(\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1-1}+\dfrac{(s_2^2/n_2)^2}{n_2-1}}$ | (Normal populations or n₁ + n₂ > 40), independent observations, and σ₁ ≠ σ₂ both unknown. |
One-proportion z-test | $z=\dfrac{\hat{p}-p_0}{\sqrt{p_0(1-p_0)/n}}$ | n·p₀ > 10 and n(1 − p₀) > 10, and a simple random sample (SRS); see notes. |
Two-proportion z-test, pooled for $H_0\colon p_1=p_2$ | $z=\dfrac{\hat{p}_1-\hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)}}$, $\hat{p}=\dfrac{x_1+x_2}{n_1+n_2}$ | n₁p₁ > 5, n₁(1 − p₁) > 5, n₂p₂ > 5, n₂(1 − p₂) > 5, and independent observations; see notes. |
Two-proportion z-test, unpooled for $\lvert d_0\rvert>0$ | $z=\dfrac{(\hat{p}_1-\hat{p}_2)-d_0}{\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}$ | n₁p₁ > 5, n₁(1 − p₁) > 5, n₂p₂ > 5, n₂(1 − p₂) > 5, and independent observations; see notes. |
Chi-squared test for variance | $\chi^2=(n-1)\dfrac{s^2}{\sigma_0^2}$ | $df=n-1$; normal population. |
Chi-squared test for goodness of fit | $\chi^2=\sum_k \dfrac{(\text{observed}-\text{expected})^2}{\text{expected}}$ | df = k − 1 − (number of parameters estimated), and one of these must hold: all expected counts are at least 5, or all expected counts are greater than 1 and no more than 20% of expected counts are less than 5. |
Two-sample F test for equality of variances | $F=\dfrac{s_1^2}{s_2^2}$ | Normal populations. Arrange so $s_1^2\geq s_2^2$ and reject H₀ for $F>F(\alpha/2,\,n_1-1,\,n_2-1)$. |
Regression t-test of $H_0\colon R^2=0$ | $t=\sqrt{\dfrac{R^2(n-k-1^{*})}{1-R^2}}$ | Reject H₀ for $t>t(\alpha/2,\,n-k-1^{*})$. *Subtract 1 for intercept; k terms contain independent variables. |
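As an illustrative sketch (not part of the original table), the one-sample t and Welch's t formulas above translate directly into Python using only the standard library; the helper names `one_sample_t` and `welch_t` are my own.

```python
import math
from statistics import mean, stdev

def one_sample_t(xs, mu0):
    """t = (x-bar - mu0) / (s / sqrt(n)), with df = n - 1."""
    n = len(xs)
    t = (mean(xs) - mu0) / (stdev(xs) / math.sqrt(n))
    return t, n - 1

def welch_t(xs, ys, d0=0.0):
    """Welch's unpooled t and its Welch-Satterthwaite df."""
    n1, n2 = len(xs), len(ys)
    v1 = stdev(xs) ** 2 / n1   # s1^2 / n1
    v2 = stdev(ys) ** 2 / n2   # s2^2 / n2
    t = (mean(xs) - mean(ys) - d0) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df
```

Note that `statistics.stdev` is the sample (n − 1) standard deviation, matching the $s$ in the table; the resulting t would then be compared against the appropriate t distribution with the returned df.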
In general, the subscript 0 indicates a value taken from the null hypothesis, H₀, which should be used as much as possible in constructing its test statistic. ... Definitions of other symbols:

- $s^2$ = sample variance
- $s_1$ = sample 1 standard deviation
- $s_2$ = sample 2 standard deviation
- $t$ = t statistic
- $df$ = degrees of freedom
- $\overline{d}$ = sample mean of differences
- $d_0$ = hypothesized population mean difference
- $s_d$ = standard deviation of differences
- $\chi^2$ = chi-squared statistic
- $\hat{p} = x/n$ = sample proportion, unless specified otherwise
- $p_0$ = hypothesized population proportion
- $p_1$ = proportion 1
- $p_2$ = proportion 2
- $d_p$ = hypothesized difference in proportion
- $\min\{n_1,n_2\}$ = minimum of $n_1$ and $n_2$
- $x_1 = n_1 p_1$
- $x_2 = n_2 p_2$
- $F$ = F statistic
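Using these symbols, the proportion z-tests from the table can likewise be sketched in plain Python; this is a minimal illustration under the table's sample-size conditions, and the function names are my own.

```python
import math

def one_proportion_z(x, n, p0):
    """z = (p-hat - p0) / sqrt(p0 (1 - p0) / n).
    Valid roughly when n * p0 > 10 and n * (1 - p0) > 10."""
    p_hat = x / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

def two_proportion_z_pooled(x1, n1, x2, n2):
    """Pooled two-proportion z for H0: p1 = p2, with
    p-hat = (x1 + x2) / (n1 + n2)."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_hat = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
    return (p1_hat - p2_hat) / se
```

For example, 60 successes in 100 trials against $p_0 = 0.5$ gives $z = (0.6 - 0.5)/\sqrt{0.25/100} = 2$, since both $np_0$ and $n(1-p_0)$ comfortably exceed 10.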