In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without it. If A is the hypothesis, and B and C are observations, conditional independence can be stated as an equality:

P(A ∣ B, C) = P(A ∣ C)
where P(A ∣ B, C) is the probability of A given both B and C. Since the probability of A given C is the same as the probability of A given both B and C, this equality expresses that B contributes nothing to the certainty of A. In this case, A and B are said to be conditionally independent given C, written symbolically as (A ⊥⊥ B ∣ C).
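As a concrete illustration (a hypothetical sketch, not part of the original article), the following Python snippet constructs a small joint distribution over three binary variables A, B, and C in which A and B are conditionally independent given C, and verifies numerically that P(A ∣ B, C) = P(A ∣ C) for every value of B and C. The specific probability values are arbitrary choices made for the example.

```python
import itertools

# Hypothetical toy distribution: three binary variables A, B, C,
# constructed so that A and B are conditionally independent given C.
p_c = {0: 0.4, 1: 0.6}          # P(C = c)
p_a_given_c = {0: 0.2, 1: 0.7}  # P(A = 1 | C = c)
p_b_given_c = {0: 0.5, 1: 0.9}  # P(B = 1 | C = c)

def bernoulli(p, x):
    """Probability that a Bernoulli(p) variable takes value x (0 or 1)."""
    return p if x == 1 else 1 - p

# Joint distribution P(A, B, C) = P(C) * P(A | C) * P(B | C),
# which encodes the conditional independence of A and B given C.
joint = {
    (a, b, c): p_c[c] * bernoulli(p_a_given_c[c], a) * bernoulli(p_b_given_c[c], b)
    for a, b, c in itertools.product([0, 1], repeat=3)
}

def prob(pred):
    """Sum the joint probability over all outcomes (a, b, c) satisfying pred."""
    return sum(p for abc, p in joint.items() if pred(*abc))

for c in (0, 1):
    for b in (0, 1):
        # P(A = 1 | B = b, C = c), computed directly from the joint
        p_a_bc = (prob(lambda a_, b_, c_: a_ == 1 and b_ == b and c_ == c)
                  / prob(lambda a_, b_, c_: b_ == b and c_ == c))
        # P(A = 1 | C = c), computed directly from the joint
        p_a_c = (prob(lambda a_, b_, c_: a_ == 1 and c_ == c)
                 / prob(lambda a_, b_, c_: c_ == c))
        assert abs(p_a_bc - p_a_c) < 1e-12
        print(f"P(A=1 | B={b}, C={c}) = {p_a_bc:.3f} = P(A=1 | C={c}) = {p_a_c:.3f}")
```

The assertion passes for every combination of b and c, reflecting that once C is known, observing B does not change the probability assigned to A.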
The concept of conditional independence is essential to graph-based theories of statistical inference, as it establishes a mathematical relation between a collection of conditional statements and a graphoid.