Conditional probability
Measure of likelihood of an event when another event is known to have occurred

In probability theory, conditional probability measures the likelihood of an event A occurring given that another event B has occurred, expressed as P(A|B) = P(A ∩ B) / P(B). For example, while the overall chance of coughing might be 5%, if a person is sick, the probability of coughing might increase to 75%. Conditional probabilities differ from absolute probabilities, and if P(A|B) = P(A), events A and B are considered independent. A conditional probability and its inverse can also differ greatly; for instance, the chance of testing positive given dengue is about 90%, but the chance of having dengue given a positive test can be much lower due to false positives, a confusion that leads to the base rate fallacy. Bayes' theorem can be used to reverse conditional probabilities, and conditional probability tables can clarify these relationships.


Definition

Conditioning on an event

Kolmogorov definition

Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B greater than zero (i.e., P(B) > 0), the conditional probability of A given B, written P(A ∣ B), is the probability of A occurring if B has occurred or is assumed to have occurred.[5] Conditioning on B restricts the sample space to the outcomes in B, and the probability of A is then evaluated within this reduced sample space. The conditional probability is the quotient of the probability of the joint intersection of A and B, that is, P(A ∩ B), the probability that A and B occur together, and the probability of B:[6][7][8]

P(A ∣ B) = P(A ∩ B) / P(B).

For a sample space consisting of equally likely outcomes, the probability of the event A is understood as the fraction of the number of outcomes in A to the number of all outcomes in the sample space. The equation above is then understood as the fraction of the set A ∩ B to the set B. Note that the above equation is a definition, not just a theoretical result. We denote the quantity P(A ∩ B) / P(B) by P(A ∣ B) and call it the "conditional probability of A given B."
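As a quick illustration of this counting definition, here is a minimal Python sketch on a finite sample space of equally likely outcomes; the deck-of-cards events are an assumed example (A = "the card is a king", B = "the card is a face card") and are not taken from the text above.

```python
from fractions import Fraction
from itertools import product

# A minimal sketch: a standard 52-card deck as a sample space of equally
# likely outcomes (assumed example, not from the article).
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
omega = list(product(ranks, suits))          # 52 equally likely outcomes

def prob(event):
    """P(event) = |event| / |sample space| for equally likely outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == "K"                    # the card is a king
B = lambda w: w[0] in ("J", "Q", "K")        # the card is a face card

p_b = prob(B)
p_a_and_b = prob(lambda w: A(w) and B(w))
print(p_a_and_b / p_b)                       # P(A | B) = (4/52) / (12/52) = 1/3
```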

As an axiom of probability

Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

P(A ∩ B) = P(A ∣ B) P(B).

This equation for conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be read as: the probability that B occurs, multiplied by the probability that A occurs given that B has occurred, equals the probability that A and B occur together (though not necessarily at the same time). This formulation may also be preferred philosophically; under major interpretations of probability, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of A ∩ B and introduces a symmetry with the summation axiom in the Poincaré (inclusion–exclusion) formula:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Thus the equations can be combined to find a new representation of P(A ∩ B):

P(A ∩ B) = P(A) + P(B) − P(A ∪ B) = P(A ∣ B) P(B)

and, equivalently,

P(A ∪ B) = P(A) + P(B) − P(A ∣ B) P(B).
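The following short Python check illustrates the multiplication rule and the combined representation above with arbitrarily assumed values of P(B), P(A ∣ B), and P(A); the numbers have no significance beyond the arithmetic.

```python
from fractions import Fraction

# Hypothetical numbers chosen only to illustrate the identities above.
P_B = Fraction(1, 3)          # P(B)
P_A_given_B = Fraction(3, 4)  # P(A | B)
P_A = Fraction(1, 2)          # P(A)

P_A_and_B = P_A_given_B * P_B                 # multiplication rule
P_A_or_B = P_A + P_B - P_A_given_B * P_B      # combined representation
print(P_A_and_B, P_A_or_B)                    # 1/4, 7/12
```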

As the probability of a conditional event

Conditional probability can be defined as the probability of a conditional event A_B. The Goodman–Nguyen–Van Fraassen conditional event can be defined as:

A_B = ⋃_{i ≥ 1} ( ⋂_{j < i} B̄_j ∩ A_i B_i ),

where A_i and B_i represent states or elements of A or B.[9]

It can be shown that

P(A_B) = P(A ∩ B) / P(B),

which meets the Kolmogorov definition of conditional probability.[10]

Conditioning on an event of probability zero

If P(B) = 0, then according to the definition, P(A ∣ B) is undefined.

The case of greatest interest is that of a random variable Y, conditioned on a continuous random variable X resulting in a particular outcome x. The event B = {X = x} has probability zero and, as such, cannot be conditioned on.

Instead of conditioning on X being exactly x, we could condition on it being closer than distance ε away from x. The event B = {x − ε < X < x + ε} will generally have nonzero probability and hence can be conditioned on. We can then take the limit

lim_{ε → 0} P(A ∣ x − ε < X < x + ε).     (1)

For example, if two continuous random variables X and Y have a joint density f_{X,Y}(x, y), then by L'Hôpital's rule and the Leibniz integral rule, upon differentiation with respect to ε:

lim_{ε → 0} P(Y ∈ U ∣ x_0 − ε < X < x_0 + ε)
  = lim_{ε → 0} [ ∫_{x_0−ε}^{x_0+ε} ∫_U f_{X,Y}(x, y) dy dx ] / [ ∫_{x_0−ε}^{x_0+ε} ∫_ℝ f_{X,Y}(x, y) dy dx ]
  = [ ∫_U f_{X,Y}(x_0, y) dy ] / [ ∫_ℝ f_{X,Y}(x_0, y) dy ].

The resulting limit is the conditional probability distribution of Y given X and exists when the denominator, the probability density f_X(x_0), is strictly positive.
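The shrinking-interval limit can be observed numerically. The sketch below is an assumed example rather than anything from the text: it takes X and Y to be standard bivariate normal with correlation rho, so the limiting conditional law of Y given X = x0 is known in closed form, and it approximates the slab probabilities with a crude midpoint rule.

```python
import math

# Assumed example: standard bivariate normal with correlation rho, so the
# limiting conditional law of Y given X = x0 is N(rho * x0, 1 - rho^2).
rho, x0 = 0.6, 1.0

def f_xy(x, y):
    """Joint density of a standard bivariate normal with correlation rho."""
    z = (x * x - 2 * rho * x * y + y * y) / (1 - rho * rho)
    return math.exp(-z / 2) / (2 * math.pi * math.sqrt(1 - rho * rho))

def slab_prob(eps, y_lo, y_hi, n=200):
    """P(y_lo < Y < y_hi | x0 - eps < X < x0 + eps) via a simple midpoint rule."""
    def integrate(y_a, y_b):
        total = 0.0
        for i in range(n):
            x = x0 - eps + (2 * eps) * (i + 0.5) / n
            for j in range(n):
                y = y_a + (y_b - y_a) * (j + 0.5) / n
                total += f_xy(x, y)
        return total * (2 * eps / n) * ((y_b - y_a) / n)
    # The denominator uses a wide but finite range of y as a stand-in for R.
    return integrate(y_lo, y_hi) / integrate(-8.0, 8.0)

for eps in (0.5, 0.1, 0.02):
    print(eps, slab_prob(eps, 0.0, 2.0))
# The values approach P(0 < Z < 2) for Z ~ N(rho * x0, 1 - rho^2), about 0.733.
```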

It is tempting to define the undefined probability P(A ∣ X = x) using limit (1), but this cannot be done in a consistent manner. In particular, it is possible to find random variables X and W and values x, w such that the events {X = x} and {W = w} are identical but the resulting limits are not:

lim_{ε → 0} P(A ∣ x − ε ≤ X ≤ x + ε) ≠ lim_{ε → 0} P(A ∣ w − ε ≤ W ≤ w + ε).

The Borel–Kolmogorov paradox demonstrates this with a geometrical argument.

Conditioning on a discrete random variable

See also: Conditional probability distribution, Conditional expectation, and Regular conditional probability

Let X be a discrete random variable and let its possible outcomes be denoted V. For example, if X represents the value of a rolled die, then V is the set {1, 2, 3, 4, 5, 6}. Let us assume for the sake of presentation that each value in V has nonzero probability.

For a value x in V and an event A, the conditional probability is given by P(A ∣ X = x). Writing

c(x, A) = P(A ∣ X = x)

for short, we see that it is a function of two variables, x and A.

For a fixed A, we can form the random variable Y = c(X, A). It takes the value P(A ∣ X = x) whenever the value x of X is observed.

The conditional probability of A given X can thus be treated as a random variable Y with outcomes in the interval [0, 1]. By the law of total probability, its expected value is equal to the unconditional probability of A.
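A small Python check of this fact, using an assumed example in which X is the first of two fair dice and A is the event that their sum is at least 10:

```python
from fractions import Fraction
from itertools import product

# Assumed example: X is the first of two fair dice, A = "the sum is >= 10".
outcomes = list(product(range(1, 7), repeat=2))

def p(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

A = lambda w: w[0] + w[1] >= 10

# c(x, A) = P(A | X = x) for each possible value x of X.
c = {x: p(lambda w, x=x: A(w) and w[0] == x) / p(lambda w, x=x: w[0] == x)
     for x in range(1, 7)}

# The expectation E[c(X, A)] recovers the unconditional probability P(A).
expected = sum(c[x] * Fraction(1, 6) for x in range(1, 7))
print(expected, p(A))   # both 1/6
```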

Partial conditional probability

The partial conditional probability P(A ∣ B_1 ≡ b_1, …, B_m ≡ b_m) is the probability of event A given that each of the condition events B_i has occurred to a degree b_i (a degree of belief or degree of experience) that might differ from 100%. Frequentistically, partial conditional probability makes sense if the conditions are tested in experiment repetitions of appropriate length n.[11] Such n-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event A in testbeds of length n that adhere to all of the probability specifications B_i ≡ b_i, i.e.:

P^n(A ∣ B_1 ≡ b_1, …, B_m ≡ b_m) = E( Ā^n ∣ B̄_1^n = b_1, …, B̄_m^n = b_m ).[12]

Based on that, partial conditional probability can be defined as

P(A ∣ B_1 ≡ b_1, …, B_m ≡ b_m) = lim_{n → ∞} P^n(A ∣ B_1 ≡ b_1, …, B_m ≡ b_m),

where b_i n ∈ ℕ.[13]

Jeffrey conditionalization[14][15] is a special case of partial conditional probability, in which the condition events must form a partition:

P(A ∣ B_1 ≡ b_1, …, B_m ≡ b_m) = ∑_{i=1}^m b_i P(A ∣ B_i)
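A minimal sketch of Jeffrey conditionalization in Python; the conditional probabilities P(A ∣ B_i) and the new degrees of belief b_i are assumed, illustrative numbers only.

```python
from fractions import Fraction

# Assumed example: a partition B1, B2, B3 with known P(A | Bi), updated to
# new degrees of belief b1, b2, b3 in the partition elements.
P_A_given_B = [Fraction(9, 10), Fraction(1, 2), Fraction(1, 10)]
b = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]   # must sum to 1

assert sum(b) == 1
P_A = sum(bi * pi for bi, pi in zip(b, P_A_given_B))    # Jeffrey update
print(P_A)   # 9/20 + 3/20 + 1/50 = 31/50
```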

Example

Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5.

  • Let D1 be the value rolled on die 1.
  • Let D2 be the value rolled on die 2.

Probability that D1 = 2

Table 1 shows the sample space of 36 combinations of rolled values of the two dice, each of which occurs with probability 1/36; each cell shows the sum D1 + D2.

D1 = 2 in exactly 6 of the 36 outcomes; thus P(D1 = 2) = 6⁄36 = 1⁄6:

Table 1. Each cell shows the sum D1 + D2; the row D1 = 2 contains the 6 outcomes with D1 = 2.

         D2=1  D2=2  D2=3  D2=4  D2=5  D2=6
  D1=1     2     3     4     5     6     7
  D1=2     3     4     5     6     7     8
  D1=3     4     5     6     7     8     9
  D1=4     5     6     7     8     9    10
  D1=5     6     7     8     9    10    11
  D1=6     7     8     9    10    11    12

Probability that D1 + D2 ≤ 5

Table 2 shows that D1 + D2 ≤ 5 for exactly 10 of the 36 outcomes, thus P(D1 + D2 ≤ 5) = 10⁄36:

Table 2. Each cell shows the sum D1 + D2; the 10 outcomes with D1 + D2 ≤ 5 are marked with *.

         D2=1  D2=2  D2=3  D2=4  D2=5  D2=6
  D1=1     2*    3*    4*    5*    6     7
  D1=2     3*    4*    5*    6     7     8
  D1=3     4*    5*    6     7     8     9
  D1=4     5*    6     7     8     9    10
  D1=5     6     7     8     9    10    11
  D1=6     7     8     9    10    11    12

Probability that D1 = 2 given that D1 + D2 ≤ 5

Table 3 shows that for 3 of these 10 outcomes, D1 = 2.

Thus, the conditional probability P(D1 = 2 | D1+D2 ≤ 5) = 3⁄10 = 0.3:

Table 3. Each cell shows the sum D1 + D2; the 3 outcomes with D1 = 2 and D1 + D2 ≤ 5 are marked with *.

         D2=1  D2=2  D2=3  D2=4  D2=5  D2=6
  D1=1     2     3     4     5     6     7
  D1=2     3*    4*    5*    6     7     8
  D1=3     4     5     6     7     8     9
  D1=4     5     6     7     8     9    10
  D1=5     6     7     8     9    10    11
  D1=6     7     8     9    10    11    12

Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2. We have P(A ∣ B) = P(A ∩ B) / P(B) = (3/36) / (10/36) = 3/10, as seen in the tables.
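The counts used above can be verified by brute-force enumeration of the 36 outcomes, for example with a few lines of Python:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely ordered rolls (D1, D2) from the example.
rolls = list(product(range(1, 7), repeat=2))

n_B = sum(1 for d1, d2 in rolls if d1 + d2 <= 5)                    # 10 outcomes
n_A_and_B = sum(1 for d1, d2 in rolls if d1 == 2 and d1 + d2 <= 5)  # 3 outcomes

print(Fraction(n_A_and_B, 36) / Fraction(n_B, 36))                  # 3/10
```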

Use in inference

In statistical inference, the conditional probability is an update of the probability of an event based on new information.[16] The new information can be incorporated as follows:[17]

  • Let A, the event of interest, be in the sample space, say (X, P).
  • The occurrence of the event A, knowing that event B has or will have occurred, means the occurrence of A as it is restricted to B, i.e. A ∩ B.
  • Without knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A).
  • The probability of A knowing that event B has or will have occurred will be the probability of A ∩ B relative to P(B), the probability that B has occurred.
  • This results in P(A ∣ B) = P(A ∩ B) / P(B) whenever P(B) > 0 and 0 otherwise.

This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of A with respect to X will be preserved with respect to B (cf. the Formal derivation below).

The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). This is consistent with the frequentist interpretation, which is the first definition given above.

Example

When Morse code is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous, typically because of interference in the transmission of the message. It is therefore important to consider, when a "dot" is received, the probability that a "dot" was actually sent. This is represented by:

P(dot sent ∣ dot received) = P(dot received ∣ dot sent) P(dot sent) / P(dot received).

In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so the probabilities of a "dot" and a "dash" are

P(dot sent) = 3/7 and P(dash sent) = 4/7.

If it is assumed that the probability that a dot is transmitted as a dash is 1/10, and that the probability that a dash is transmitted as a dot is likewise 1/10, then Bayes' rule can be used to calculate P(dot received).

P(dot received) = P(dot received ∩ dot sent) + P(dot received ∩ dash sent)

P(dot received) = P(dot received ∣ dot sent) P(dot sent) + P(dot received ∣ dash sent) P(dash sent)

P(dot received) = 9/10 × 3/7 + 1/10 × 4/7 = 31/70

Now, P(dot sent ∣ dot received) can be calculated:

P(dot sent ∣ dot received) = P(dot received ∣ dot sent) P(dot sent) / P(dot received) = (9/10 × 3/7) / (31/70) = 27/31.[18]
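The same calculation can be reproduced with exact fractions, for example in Python:

```python
from fractions import Fraction

# A short check of the Morse-code calculation above using exact fractions.
p_dot_sent = Fraction(3, 7)
p_dash_sent = Fraction(4, 7)
p_received_dot_given_dot = Fraction(9, 10)    # 1/10 chance a dot is garbled
p_received_dot_given_dash = Fraction(1, 10)   # 1/10 chance a dash is garbled

# Law of total probability for the denominator.
p_dot_received = (p_received_dot_given_dot * p_dot_sent
                  + p_received_dot_given_dash * p_dash_sent)
print(p_dot_received)                          # 31/70

# Bayes' theorem for the quantity of interest.
print(p_received_dot_given_dot * p_dot_sent / p_dot_received)   # 27/31
```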

Statistical independence

Main article: Independence (probability theory)

Events A and B are defined to be statistically independent if the probability of the intersection of A and B is equal to the product of the probabilities of A and B:

P(A ∩ B) = P(A) P(B).

If P(B) is not zero, then this is equivalent to the statement that

P(A ∣ B) = P(A).

Similarly, if P(A) is not zero, then

P(B ∣ A) = P(B)

is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined, and the preferred definition is symmetric in A and B. Independence is not the same as the events being disjoint (mutually exclusive).[19]

Note also that, given the independent event pair [A B] and an event C, the pair is defined to be conditionally independent given C if the following product holds true:[20]

P(AB ∣ C) = P(A ∣ C) P(B ∣ C)

This property can be useful in applications where multiple independent events are being observed.

Independent events vs. mutually exclusive events

The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero).

                    If statistically independent    If mutually exclusive
  P(A ∣ B) =        P(A)                             0
  P(B ∣ A) =        P(B)                             0
  P(A ∩ B) =        P(A) P(B)                        0

In fact, mutually exclusive events cannot be statistically independent (unless both of them are impossible), since knowing that one occurs gives information about the other (in particular, that the latter will certainly not occur).
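The contrast in the table can be checked numerically. In the sketch below (events chosen as an assumed example on two fair dice), A and B are independent while A and C are mutually exclusive:

```python
from fractions import Fraction
from itertools import product

# Assumed example on two fair dice.
omega = list(product(range(1, 7), repeat=2))

def p(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond(a, b):
    return p(lambda w: a(w) and b(w)) / p(b)

A = lambda w: w[0] % 2 == 0   # first die is even
B = lambda w: w[1] == 6       # second die shows 6 (independent of A)
C = lambda w: w[0] == 1       # first die shows 1 (mutually exclusive with A)

print(cond(A, B), p(A))                           # 1/2 and 1/2: P(A | B) = P(A)
print(p(lambda w: A(w) and B(w)), p(A) * p(B))    # both 1/12
print(cond(A, C), p(lambda w: A(w) and C(w)))     # 0 and 0
```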

Common fallacies

These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.

Assuming conditional probability is of similar size to its inverse

Main article: Confusion of the inverse

In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.[21] The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:

P(B ∣ A) = P(A ∣ B) P(B) / P(A)

⇔ P(B ∣ A) / P(A ∣ B) = P(B) / P(A)

That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).
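The dengue example from the lead illustrates how large the gap can be. In the sketch below the 90% sensitivity echoes the lead, while the 1% prevalence and 5% false-positive rate are assumed purely for illustration:

```python
# Illustration of confusion of the inverse: P(positive | disease) vs.
# P(disease | positive). Prevalence and false-positive rate are assumed.
p_disease = 0.01                 # assumed prevalence
p_pos_given_disease = 0.90       # sensitivity, as in the lead
p_pos_given_healthy = 0.05       # assumed false-positive rate

# Law of total probability, then Bayes' theorem.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_pos, 4))                # 0.0585
print(round(p_disease_given_pos, 3))  # about 0.154, far below 0.90
```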

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the law of total probability:

P(A) = ∑_n P(A ∩ B_n) = ∑_n P(A ∣ B_n) P(B_n),

where the events (B_n) form a countable partition of Ω.

This fallacy may arise through selection bias.[22] For example, in the context of a medical claim, let S_C be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S (so that P(S_C) is low). Suppose also that medical attention is sought only if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S_C) is high. The actual probability observed by the doctor is P(S_C | H).

Over- or under-weighting priors

Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is conservatism.

Formal derivation

Formally, P(A | B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures.[23][24]

Let Ω be a discrete sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not in B will have null probability in the new distribution. For events in B, two conditions must be met: the probability of B is one, and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one, and every event that is not in B therefore has null probability. Hence, for some scale factor α, the new distribution must satisfy:

  1. ω ∈ B: P(ω ∣ B) = α P(ω)
  2. ω ∉ B: P(ω ∣ B) = 0
  3. ∑_{ω ∈ Ω} P(ω ∣ B) = 1.

Substituting 1 and 2 into 3 to select α:

1 = ∑_{ω ∈ Ω} P(ω ∣ B)
  = ∑_{ω ∈ B} P(ω ∣ B) + ∑_{ω ∉ B} P(ω ∣ B)   (the second sum is 0)
  = α ∑_{ω ∈ B} P(ω)
  = α · P(B)
⇒ α = 1 / P(B)

So the new probability distribution is

  1. ω ∈ B: P(ω ∣ B) = P(ω) / P(B)
  2. ω ∉ B: P(ω ∣ B) = 0

Now for a general event A,

P(A ∣ B) = ∑_{ω ∈ A ∩ B} P(ω ∣ B) + ∑_{ω ∈ A ∩ Bᶜ} P(ω ∣ B)   (the second sum is 0)
         = ∑_{ω ∈ A ∩ B} P(ω) / P(B)
         = P(A ∩ B) / P(B)
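The rescaling argument above is easy to carry out explicitly on a small discrete sample space. The sketch below uses assumed, non-uniform outcome probabilities:

```python
from fractions import Fraction

# Assumed example: rescale the outcomes inside B by alpha = 1 / P(B) and
# set everything outside B to zero, as in the derivation above.
P = {"w1": Fraction(1, 2), "w2": Fraction(1, 4),
     "w3": Fraction(1, 8), "w4": Fraction(1, 8)}
B = {"w2", "w3"}

P_B = sum(P[w] for w in B)                       # P(B) = 3/8
alpha = 1 / P_B                                  # scale factor 8/3
P_cond = {w: (alpha * P[w] if w in B else Fraction(0)) for w in P}

print(sum(P_cond.values()))                      # 1, as required
A = {"w1", "w2"}
print(sum(P_cond[w] for w in A))                 # P(A | B) = (1/4)/(3/8) = 2/3
```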

References

  1. Gut, Allan (2013). Probability: A Graduate Course (2nd ed.). New York, NY: Springer. ISBN 978-1-4614-4707-8.

  2. "Conditional Probability". www.mathsisfun.com. Retrieved 2020-09-11. https://www.mathsisfun.com/data/probability-events-conditional.html

  3. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics: 26. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1. ISSN 1431-875X.

  4. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics: 25–40. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1. ISSN 1431-875X.

  5. Reichl, Linda Elizabeth (2016). "2.3 Probability". A Modern Course in Statistical Physics (4th revised and updated ed.). Wiley-VCH. ISBN 978-3-527-69049-7.

  6. "Conditional Probability". www.mathsisfun.com. Retrieved 2020-09-11. https://www.mathsisfun.com/data/probability-events-conditional.html

  7. Kolmogorov, Andrey (1956). Foundations of the Theory of Probability. Chelsea.

  8. "Conditional Probability". www.stat.yale.edu. Retrieved 2020-09-11. http://www.stat.yale.edu/Courses/1997-98/101/condprob.htm

  9. Flaminio, Tommaso; Godo, Lluis; Hosni, Hykel (2020-09-01). "Boolean algebras of conditionals, probability and logic". Artificial Intelligence. 286: 103347. arXiv:2006.04673. doi:10.1016/j.artint.2020.103347. ISSN 0004-3702. S2CID 214584872. https://www.sciencedirect.com/science/article/pii/S000437022030103X

  10. Van Fraassen, Bas C. (1976). "Probabilities of Conditionals". In Harper, William L.; Hooker, Clifford Alan (eds.), Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, Volume I. The University of Western Ontario Series in Philosophy of Science. Dordrecht: Springer Netherlands. pp. 261–308. doi:10.1007/978-94-010-1853-1_10. ISBN 978-94-010-1853-1.

  11. Draheim, Dirk (2017). Generalized Jeffrey Conditionalization (A Frequentist Semantics of Partial Conditionalization). Springer. Retrieved December 19, 2017. http://fpc.formcharts.org

  12. Draheim, Dirk (2017). Generalized Jeffrey Conditionalization (A Frequentist Semantics of Partial Conditionalization). Springer. Retrieved December 19, 2017. http://fpc.formcharts.org

  13. Draheim, Dirk (2017). Generalized Jeffrey Conditionalization (A Frequentist Semantics of Partial Conditionalization). Springer. Retrieved December 19, 2017. http://fpc.formcharts.org

  14. Jeffrey, Richard C. (1983). The Logic of Decision (2nd ed.). University of Chicago Press. ISBN 9780226395821.

  15. "Bayesian Epistemology". Stanford Encyclopedia of Philosophy. 2017. Retrieved December 29, 2017. https://plato.stanford.edu/entries/epistemology-bayesian/

  16. Casella, George; Berger, Roger L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6.

  17. Gut, Allan (2013). Probability: A Graduate Course (2nd ed.). New York, NY: Springer. ISBN 978-1-4614-4707-8.

  18. "Conditional Probability and Independence" (PDF). Retrieved 2021-12-22. http://www.math.ntu.edu.tw/~hchen/teaching/StatInference/notes/lecture4.pdf

  19. Tijms, Henk (2012). Understanding Probability (3rd ed.). Cambridge: Cambridge University Press. doi:10.1017/cbo9781139206990. ISBN 978-1-107-65856-1.

  20. Pfeiffer, Paul E. (1978). Conditional Independence in Applied Probability. Boston, MA: Birkhäuser Boston. ISBN 978-1-4612-6335-7. OCLC 858880328.

  21. Paulos, J. A. (1988). Innumeracy: Mathematical Illiteracy and its Consequences. Hill and Wang. ISBN 0-8090-7447-8. (p. 63 et seq.)

  22. Bruss, F. Thomas (2007). "Der Wyatt-Earp-Effekt oder die betörende Macht kleiner Wahrscheinlichkeiten" [The Wyatt Earp effect, or the beguiling power of small probabilities] (in German). Spektrum der Wissenschaft (German edition of Scientific American), Vol. 2, 110–113.

  23. Casella, George; Berger, Roger L. (1990). Statistical Inference. Duxbury Press. ISBN 0-534-11958-1. (p. 18 et seq.)

  24. Grinstead and Snell's Introduction to Probability, p. 134. http://math.dartmouth.edu/~prob/prob/prob.pdf