Quantitative Methods
# Learning Module 4: Probability Trees and Conditional Expectations

## Expected Value of a Discrete Random Variable X
\[ E(X) = P(X_1)X_1 + P(X_2)X_2 + \cdots + P(X_n)X_n = \sum_{i=1}^{n} P(X_i)X_i \tag{1} \]
Where:
- \(E(X)\): expected value of random variable \(X\)
- \(X_i\): one of \(n\) possible outcomes of the discrete random variable \(X\)
- \(P(X_i)\): probability of outcome \(X_i\)
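As a quick numerical check of formula (1), the sketch below applies it to a hypothetical three-outcome distribution (the payoffs and probabilities are illustrative only, not from the curriculum):

```python
# Expected value of a discrete random variable, per formula (1):
# E(X) = sum over i of P(X_i) * X_i
# Hypothetical outcomes (e.g., payoffs) and their probabilities.
outcomes = [100.0, 50.0, -25.0]
probs = [0.25, 0.50, 0.25]

assert abs(sum(probs) - 1.0) < 1e-12  # probabilities must sum to 1

expected_value = sum(p * x for p, x in zip(probs, outcomes))
print(expected_value)  # 0.25*100 + 0.50*50 + 0.25*(-25) = 43.75
```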
## Variance of a Random Variable
\[ \sigma^{2}(X) =E[X-E(X)]^{2} \tag{2} \]
Where:
- \(\sigma^{2}(X)\): variance of random variable \(X\)
- \(E(X)\): expected value of \(X\)
The variance is calculated from the individual outcomes as follows:
\[ \sigma^2(X) = P(X_1)[X_1 - E(X)]^2 + P(X_2)[X_2 - E(X)]^2 + \cdots + P(X_n)[X_n - E(X)]^2 = \sum_{i=1}^{n} P(X_i)[X_i - E(X)]^2 \tag{3} \]
Where:
- \(\sigma^{2}(X)\): variance of random variable \(X\)
- \(E(X)\): expected value of \(X\)
- \(X_i\): one of \(n\) possible outcomes of the discrete random variable \(X\)
- \(P(X_i)\): probability of outcome \(X_i\)
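Continuing the same hypothetical three-outcome distribution, formula (3) gives the variance as the probability-weighted sum of squared deviations from the expected value:

```python
# Variance of a discrete random variable, per formula (3):
# sigma^2(X) = sum over i of P(X_i) * [X_i - E(X)]^2
outcomes = [100.0, 50.0, -25.0]
probs = [0.25, 0.50, 0.25]

mean = sum(p * x for p, x in zip(probs, outcomes))  # E(X) = 43.75
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
std_dev = variance ** 0.5  # standard deviation, in the same units as X
print(mean, variance, std_dev)  # 43.75, 1992.1875, ~44.63
```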
## Conditional Expected Value of a Random Variable
\[ E(X \mid S)=P(X_1 \mid S) X_{1}+P(X_2 \mid S) X_{2}+\cdots+P(X_n \mid S) X_n \tag{4} \]
Where:
- \(E(X \mid S)\): expected value of random variable \(X\) given event or scenario \(S\)
- \(X_i\): one of \(n\) distinct outcomes \((X_1, X_2, \ldots, X_n)\)
- \(P(X_i \mid S)\): probability of outcome \(X_i\) given \(S\)
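Formula (4) in code, using the same hypothetical outcomes but with conditional probabilities under a scenario \(S\) (say, an economic expansion; the numbers are illustrative only):

```python
# Conditional expected value, per formula (4):
# E(X|S) = sum over i of P(X_i|S) * X_i
outcomes = [100.0, 50.0, -25.0]
probs_given_S = [0.40, 0.50, 0.10]  # P(X_i | S), hypothetical

assert abs(sum(probs_given_S) - 1.0) < 1e-12  # conditional probs sum to 1

e_given_S = sum(p * x for p, x in zip(probs_given_S, outcomes))
print(e_given_S)  # 0.40*100 + 0.50*50 + 0.10*(-25) = 62.5
```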
## Total Probability Rule for Expected Value
\[ E(X) = E(X \mid S)P(S) + E(X \mid S^{C})P(S^{C}) \tag{5} \]
Where:
- \(E(X)\): unconditional expected value of \(X\)
- \(E(X \mid S)\): expected value of \(X\) given scenario \(S\)
- \(P(S)\): probability of scenario \(S\)
- \(S^{C}\): complement of \(S\) (event or scenario \(S\) does not occur)
- \(P(S^{C})\): probability of \(S^{C}\)
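A minimal sketch of formula (5) with hypothetical inputs: if \(E(X \mid S) = 62.5\), \(E(X \mid S^{C}) = 20.0\), and \(P(S) = 0.6\), the unconditional expectation is their probability-weighted average:

```python
# Total probability rule for expected value, per formula (5):
# E(X) = E(X|S) * P(S) + E(X|S^C) * P(S^C)
e_given_S = 62.5    # E(X|S), hypothetical
e_given_Sc = 20.0   # E(X|S^C), hypothetical
p_S = 0.6
p_Sc = 1.0 - p_S    # P(S^C) is the complement of P(S)

e_X = e_given_S * p_S + e_given_Sc * p_Sc
print(e_X)  # 62.5*0.6 + 20.0*0.4 = 45.5
```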
## Total Probability Rule for Expected Value (General Case)
\[ E(X) = E(X \mid S_1)P(S_1) + E(X \mid S_2)P(S_2) + \cdots + E(X \mid S_n)P(S_n) \tag{6} \]
Where:
- \(E(X)\): unconditional expected value of \(X\)
- \(E(X \mid S_i)\): expected value of \(X\) given scenario \(S_i\)
- \(P(S_i)\): probability of scenario \(S_i\)
- \(S_{1}, S_{2}, \ldots, S_{n}\): mutually exclusive and exhaustive scenarios or events
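Formula (6) extends the two-scenario rule to any number of mutually exclusive and exhaustive scenarios; a sketch with three hypothetical scenarios:

```python
# Total probability rule for expected value, general case (formula 6):
# E(X) = sum over i of E(X|S_i) * P(S_i),
# where the S_i are mutually exclusive and exhaustive.
cond_expectations = [80.0, 40.0, -10.0]  # E(X|S_i), hypothetical
scenario_probs = [0.3, 0.5, 0.2]         # P(S_i)

assert abs(sum(scenario_probs) - 1.0) < 1e-12  # exhaustive scenarios

e_X = sum(e * p for e, p in zip(cond_expectations, scenario_probs))
print(e_X)  # 0.3*80 + 0.5*40 + 0.2*(-10) = 42.0
```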
## Total Probability Rule
\[ P(A)= \sum_{n} P(A \cap B_n) \tag{7} \]
Where:
- \(P(A)\): unconditional probability of event \(A\)
- \(P(A \cap B_n)\): probability of event \(A\) and event \(B_n\) occurring together
- \(B_n\): event \(n\) in a set of mutually exclusive and exhaustive events
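Formula (7) recovers the unconditional probability of \(A\) by summing its joint probabilities with each \(B_n\); each joint term can be computed as \(P(A \mid B_n)P(B_n)\) via the multiplication rule. A sketch with hypothetical numbers:

```python
# Total probability rule, per formula (7): P(A) = sum over n of P(A and B_n),
# with each joint probability computed as P(A|B_n) * P(B_n).
p_B = [0.2, 0.5, 0.3]          # P(B_n), hypothetical, mutually exclusive and exhaustive
p_A_given_B = [0.9, 0.4, 0.1]  # P(A|B_n), hypothetical

assert abs(sum(p_B) - 1.0) < 1e-12

joints = [pa * pb for pa, pb in zip(p_A_given_B, p_B)]  # P(A and B_n)
p_A = sum(joints)
print(p_A)  # 0.9*0.2 + 0.4*0.5 + 0.1*0.3 = 0.41
```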
## Bayes' Formula
In words, Bayes' formula updates the prior probability of an event to reflect new information:
\[ \text{Updated probability of event given the new information} = \frac{\text{Probability of the new information given event}}{\text{Unconditional probability of the new information}} \times \text{Prior probability of event} \]
In probability notation, this formula can be written concisely as follows:
\[ P(\text{Event} \mid \text{Information}) = \frac{P(\text{Information} \mid \text{Event})}{P(\text{Information})} \times P(\text{Event}) \tag{8} \]
or
\[ P(A \mid B)=\frac{P(B \mid A)}{P(B)} \times P(A) \tag{8} \]
Where:
- \(P(A \mid B)\): posterior probability of event \(A\) given information \(B\)
- \(P(B \mid A)\): probability of observing information \(B\) given event \(A\)
- \(P(A)\): prior probability of event \(A\)
- \(P(B)\): unconditional probability of information \(B\)
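A worked sketch of formula (8) with hypothetical numbers: suppose a prior \(P(A) = 0.30\), with \(P(B \mid A) = 0.80\) and \(P(B \mid A^{C}) = 0.20\). The denominator \(P(B)\) comes from the total probability rule, and Bayes' formula then updates the prior:

```python
# Bayes' formula, per (8): P(A|B) = P(B|A) / P(B) * P(A)
p_A = 0.30           # prior probability of the event, hypothetical
p_B_given_A = 0.80   # probability of the information given the event
p_B_given_Ac = 0.20  # probability of the information given the complement

# Unconditional probability of the information, via the total probability rule:
p_B = p_B_given_A * p_A + p_B_given_Ac * (1.0 - p_A)  # 0.24 + 0.14 = 0.38

posterior = p_B_given_A / p_B * p_A
print(posterior)  # 0.24 / 0.38, roughly 0.632
```

Note how the new information raises the probability of \(A\) from the prior of 0.30 to a posterior of about 0.63, because the information is four times as likely under \(A\) as under \(A^{C}\).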