In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column with the column vector of right-hand sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748, and possibly knew of it as early as 1729.
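As an illustration, the following is a minimal Python sketch of the rule (the function names and the worked example are illustrative, not standard): each unknown is obtained by replacing the corresponding column of the coefficient matrix with the right-hand side and dividing that determinant by the determinant of the coefficient matrix.

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer_solve(a, b):
    """Return the unique solution of a x = b; requires det(a) != 0."""
    d = det(a)
    if d == 0:
        raise ValueError("Cramer's rule applies only when the determinant is nonzero.")
    n = len(a)
    x = []
    for i in range(n):
        # Replace column i of the coefficient matrix by the right-hand side b.
        a_i = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]
        x.append(det(a_i) / d)
    return x

# Example: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(cramer_solve([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```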
Cramer's rule, implemented in a naive way, is computationally inefficient for systems of more than two or three equations. In the case of n equations in n unknowns, it requires the computation of n + 1 determinants, while Gaussian elimination produces the result with the same computational complexity (up to a constant factor independent of n) as the computation of a single determinant. Moreover, the Bareiss algorithm is a simple modification of Gaussian elimination that produces in a single computation a matrix whose nonzero entries are the determinants involved in Cramer's rule.
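The following sketch shows the core recurrence of Bareiss' fraction-free elimination used to compute a single determinant, assuming integer entries and nonzero leading principal minors (so no pivoting is shown); the intermediate divisions are exact for integer input.

```python
def bareiss_det(matrix):
    """Determinant via Bareiss' fraction-free Gaussian elimination."""
    m = [row[:] for row in matrix]   # work on a copy
    n = len(m)
    prev_pivot = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact division for integer input (fraction-free step).
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev_pivot
        prev_pivot = m[k][k]
    return m[n - 1][n - 1]

print(bareiss_det([[2, 1], [1, 3]]))  # 5, the denominator in the example above
```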