The Rasch model for dichotomous data takes the form:
\[
\Pr\{X_{ni}=1\} = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},
\]
where \(\beta_n\) is the ability of person \(n\) and \(\delta_i\) is the difficulty of item \(i\).
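The model probability above can be computed directly; a minimal sketch (the function name is illustrative, not from any particular library):

```python
import math

def rasch_probability(beta, delta):
    """Probability of a correct response under the dichotomous Rasch model:
    exp(beta - delta) / (1 + exp(beta - delta)), written in the numerically
    equivalent logistic form 1 / (1 + exp(-(beta - delta)))."""
    return 1.0 / (1.0 + math.exp(-(beta - delta)))
```

When ability equals difficulty (\(\beta_n = \delta_i\)), the probability is exactly 0.5; it increases toward 1 as ability exceeds difficulty.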
Let \(x_{ni}\) denote the observed response of person \(n\) to item \(i\). The probability of the observed data matrix, which is the product of the probabilities of the individual responses, is given by the likelihood function
\[
\Lambda = \prod_{n=1}^{N}\prod_{i=1}^{I} \frac{\exp\bigl(x_{ni}(\beta_n - \delta_i)\bigr)}{1 + \exp(\beta_n - \delta_i)}.
\]
The log-likelihood function is then
\[
\log \Lambda = \sum_{n=1}^{N}\beta_n r_n - \sum_{i=1}^{I}\delta_i s_i - \sum_{n=1}^{N}\sum_{i=1}^{I}\log\bigl(1 + \exp(\beta_n - \delta_i)\bigr),
\]
where \(r_n = \sum_{i=1}^{I} x_{ni}\) is the total raw score for person \(n\), \(s_i = \sum_{n=1}^{N} x_{ni}\) is the total raw score for item \(i\), \(N\) is the total number of persons and \(I\) is the total number of items.
Solution equations are obtained by taking partial derivatives with respect to \(\delta_i\) and \(\beta_n\) and setting the results equal to 0. The JML solution equations are:
\[
s_i = \sum_{n=1}^{N} p_{ni}, \qquad r_n = \sum_{i=1}^{I} p_{ni},
\]
where \(p_{ni} = \exp(\beta_n - \delta_i)/\bigl(1 + \exp(\beta_n - \delta_i)\bigr)\).
The resulting estimates are biased, and no finite estimates exist for persons with a raw score of 0 (no correct responses) or with 100% correct responses (a perfect score); the same holds for items with extreme scores. This bias is due to a well-known effect described by Kiefer & Wolfowitz (1956). It is of the order \(I/(I-1)\), and a more accurate (less biased) estimate of each \(\delta_i\) is obtained by multiplying the estimates by \((I-1)/I\).
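The two practical consequences can be sketched in code: persons and items with extreme raw scores are removed before estimation (dropping a person can make an item extreme, so the screening repeats until stable), and the multiplicative correction is applied to the resulting item estimates. A minimal sketch assuming numpy; the function names are illustrative:

```python
import numpy as np

def trim_extreme_scores(X):
    """Remove persons (rows) and items (columns) with extreme raw scores
    (0 or the maximum possible), for which no finite JML estimates exist.
    Repeats until stable, since dropping a row can make a column extreme."""
    X = np.asarray(X)
    changed = True
    while changed:
        rows = (X.sum(axis=1) > 0) & (X.sum(axis=1) < X.shape[1])
        X = X[rows]
        cols = (X.sum(axis=0) > 0) & (X.sum(axis=0) < X.shape[0])
        X = X[:, cols]
        changed = (~rows).any() or (~cols).any()
    return X

def correct_jml_bias(delta):
    """Apply the (I - 1)/I correction to JML item difficulty estimates."""
    delta = np.asarray(delta, dtype=float)
    I = delta.size
    return delta * (I - 1) / I
```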
The conditional likelihood function is defined as
\[
\Lambda = \prod_{n=1}^{N}\Pr\{(x_{ni}) \mid r_n\} = \frac{\exp\bigl(-\sum_{i} s_i \delta_i\bigr)}{\prod_{n=1}^{N} \gamma_{r_n}},
\]
in which
\[
\gamma_r = \sum_{(x_{ni}) \mid r} \exp\Bigl(-\sum_{i} x_{ni}\,\delta_i\Bigr)
\]
is the elementary symmetric function of order \(r\), which represents the sum over all combinations of \(r\) items. For example, in the case of three items,
\[
\gamma_2 = \exp(-\delta_1-\delta_2) + \exp(-\delta_1-\delta_3) + \exp(-\delta_2-\delta_3).
\]
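The elementary symmetric functions for all orders \(r = 0, \dots, I\) can be computed in a single pass with the standard recurrence on the coefficients of the polynomial \(\prod_i (1 + \epsilon_i t)\), where \(\epsilon_i = \exp(-\delta_i)\), rather than by enumerating all item subsets. A minimal sketch assuming numpy:

```python
import numpy as np

def esf(delta):
    """Elementary symmetric functions gamma_0 .. gamma_I of
    eps_i = exp(-delta_i): gamma[r] sums exp(-sum of delta over subset)
    over all r-item subsets, built by absorbing one item at a time."""
    eps = np.exp(-np.asarray(delta, dtype=float))
    gamma = np.zeros(len(eps) + 1)
    gamma[0] = 1.0  # gamma_0 = 1 (empty subset)
    for e in eps:
        # multiply the polynomial by (1 + e*t): shift-and-add its coefficients
        gamma[1:] += e * gamma[:-1]
    return gamma
```

With all \(\delta_i = 0\) each \(\epsilon_i = 1\), so \(\gamma_r\) reduces to the binomial coefficient \(\binom{I}{r}\), which gives a quick sanity check.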
Details can be found in the chapters by von Davier (2016) for the dichotomous Rasch model and von Davier & Rost (1995) for the polytomous Rasch model.
An expectation–maximization algorithm is sometimes used in the estimation of the parameters of Rasch models. Algorithms for implementing maximum likelihood estimation commonly employ Newton–Raphson iterations to solve the solution equations obtained from setting the partial derivatives of the log-likelihood functions equal to 0. Convergence criteria are used to determine when the iterations cease. For example, the criterion might be that the mean item estimate changes by less than a certain value, such as 0.001, between successive iterations for all items.
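The iterative scheme can be sketched for the JML case: alternating one-dimensional Newton–Raphson steps on the person and item solution equations, with the item difficulties centred at 0 to fix the scale origin (the likelihood is invariant under a common shift of all parameters). This is a minimal sketch assuming numpy and a data matrix with no extreme scores, not a production implementation:

```python
import numpy as np

def jml_estimate(X, tol=1e-5, max_iter=500):
    """Joint maximum likelihood estimation for the dichotomous Rasch model.

    X: 0/1 response matrix (persons x items) with no extreme
    (all-0 or all-1) rows or columns.
    Returns (beta, delta) with the deltas centred at 0."""
    X = np.asarray(X, dtype=float)
    r = X.sum(axis=1)   # person raw scores r_n
    s = X.sum(axis=0)   # item raw scores s_i
    beta = np.zeros(X.shape[0])
    delta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        old_beta, old_delta = beta.copy(), delta.copy()
        # Newton step for each ability: residual of r_n = sum_i p_ni,
        # divided by the information sum_i p_ni (1 - p_ni)
        p = 1.0 / (1.0 + np.exp(-(beta[:, None] - delta[None, :])))
        w = p * (1.0 - p)
        beta = beta + (r - p.sum(axis=1)) / w.sum(axis=1)
        # Newton step for each difficulty, with the updated abilities
        p = 1.0 / (1.0 + np.exp(-(beta[:, None] - delta[None, :])))
        w = p * (1.0 - p)
        delta = delta + (p.sum(axis=0) - s) / w.sum(axis=0)
        delta -= delta.mean()   # anchor the scale: mean item difficulty 0
        change = max(np.abs(beta - old_beta).max(),
                     np.abs(delta - old_delta).max())
        if change < tol:        # convergence criterion on parameter change
            break
    return beta, delta
```

At convergence the solution equations hold: the expected score \(\sum_n p_{ni}\) matches the observed item score \(s_i\), and likewise for the person scores.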