Relative frequency gives a way to measure the proportion of "successful" outcomes when taking an experimental approach. From the interactive applications above, it appears that the relative frequency jumps around as the experiment is repeated but that the amount of variation decreases as the number of experiments increases. This behavior holds in general and is known as the "Law of Large Numbers".
We would like to formalize what these relative frequencies are approaching and will call this theoretical limit the "probability" of the outcome. In doing so, we will do our best to model our definition so that it follows the behavior of relative frequency.
To develop a general definition of probability, we need to know what it is that we are measuring. In general, we will be finding the probability of sets of possible outcomes...that is, a subset of the Sample Space S. Toward that end, it is important to briefly look at some properties of sets.
Definition 4.3.1 Pairwise Disjoint Sets
\(\{ A_1, A_2, ... , A_n \}\) are pairwise disjoint provided \(A_k \cap A_j = \emptyset\) whenever \(k \ne j\text{.}\) Disjoint sets are also often called mutually exclusive.
Play around with the interactive cell below by adding and removing items in each of the three sets. Find elements so that the intersection of all three sets is empty but at least one pair of the sets is not disjoint. Then see if you can make every pair of sets not disjoint while the intersection of all three remains empty, as in the sketch below. This is why we need to consider "pairwise" disjoint sets.
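If you would like to check candidates away from the interactive cell, here is a minimal Python sketch (the sets A, B and C are just one hypothetical choice) of three sets in which every pair overlaps yet the intersection of all three is empty.

A = {1, 2}
B = {2, 3}
C = {1, 3}

print(A & B)      # {2}: A and B are not disjoint
print(A & C)      # {1}: A and C are not disjoint
print(B & C)      # {3}: B and C are not disjoint
print(A & B & C)  # set(): the three-way intersection is still empty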
Consider how we might create a definition for the expectation of a given outcome. To do so, first consider a desired collection of outcomes A. If each outcome in S is equally likely to be chosen, then we might mimic the relative frequency formula and set a measure of expectation to be |A|/|S|. For example, on a standard 6-sided die, the expectation of the outcome A = {2} from the collection S = {1,2,3,4,5,6} could be |A|/|S| = 1/6.
From our example where we take the sum of two dice, the outcome A = \{ 4, 5 \} from the collection S = {2,3,4,...,12} corresponds to the equally likely ordered rolls
\(\{ (1,3), (2,2), (3,1), (1,4), (2,3), (3,2), (4,1) \}\)
and so the expected relative frequency would be 7/36, the number of favorable rolls divided by the 36 equally likely rolls. Compare this theoretical value with the relative frequency of these two outcomes from your experiment above, and with the simulation sketched below.
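If you prefer to experiment in code rather than with the applet, the following rough Python simulation (the number of trials is arbitrary) tracks the relative frequency of rolling a sum of 4 or 5; it should settle near 7/36 ≈ 0.194, as the Law of Large Numbers suggests.

import random

trials = 100_000
successes = 0
for _ in range(trials):
    total = random.randint(1, 6) + random.randint(1, 6)  # sum of two fair dice
    if total in (4, 5):
        successes += 1

print(successes / trials)  # observed relative frequency, typically near 0.19
print(7 / 36)              # theoretical value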
We are now ready to formally give a name to the theoretical measure of expectation for outcomes from an experiment. Taking our cue from our examples, let's make our definition agree with the following properties of relative frequency: a relative frequency is never negative, the relative frequency of the entire sample space is 1, and the relative frequency of a union of disjoint collections of outcomes is the sum of their individual relative frequencies. This leads us to the following formal definition...
Definition 4.3.2 Probability
The probability P(A) of a given event \(A \subset S\) is a set function that satisfies:
1. \(P(A) \ge 0\) for every \(A \subset S\) (nonnegativity),
2. \(P(S) = 1\), and
3. \(P(A_1 \cup A_2 \cup \cdots) = P(A_1) + P(A_2) + \cdots\) whenever \(A_1, A_2, \ldots\) are pairwise disjoint (additivity).
Using the definition above, determine the following probabilities.
Notice that when you are given complete information about the entire data set, probabilities of events can be relatively easy to compute.
Based upon this definition we can immediately establish a number of results.
Theorem 4.3.4 Probability of Complements
For any event A, \(P(A) + P(A^c) = 1\text{.}\)
Proof
Let A be any event and note that A and \(A^c\) are disjoint, so by additivity
\(P(A \cup A^c) = P(A) + P(A^c)\text{.}\)
But \(A \cup A^c = S\text{.}\) So,
\(1 = P(S) = P(A \cup A^c) = P(A) + P(A^c)\)
as desired.
Theorem 4.3.5
\(P(\emptyset) = 0\)
Proof
Note that \(\emptyset^c = S\text{.}\) So, by the theorem above,
\(P(\emptyset) + P(S) = P(\emptyset) + 1 = 1\text{.}\)
Subtracting 1 from both sides gives \(P(\emptyset) = 0\text{.}\)
Theorem 4.3.6
For events A and B with \(A \subset B\text{,}\) \(P(A) \le P(B)\text{.}\)
Proof
Assume sets A and B satisfy \(A \subset B\text{.}\) Then, notice that
\(B = A \cup (B \cap A^c)\)
and
\(A \cap (B \cap A^c) = \emptyset\text{.}\)
Therefore, by additivity and nonnegativity,
\(P(B) = P(A) + P(B \cap A^c) \ge P(A)\text{.}\)
Theorem 4.3.7
For any event A, \(P(A) \le 1\text{.}\)
Proof
Notice \(A \subset S\text{.}\) By the theorem above, \(P(A) \le P(S) = 1\text{.}\)
Theorem 4.3.8
For any sets A and B, \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\text{.}\)
Proof
Notice that we can write \(A \cup B\) as the disjoint union
\(A \cup B = A \cup (B \cap A^c)\text{.}\)
We can also write disjointly
\(B = (A \cap B) \cup (B \cap A^c)\text{,}\)
so that \(P(B \cap A^c) = P(B) - P(A \cap B)\text{.}\)
Hence,
\(P(A \cup B) = P(A) + P(B \cap A^c) = P(A) + P(B) - P(A \cap B)\)
as desired.
This result can be extended to more than two sets using a property known as inclusion-exclusion. The following two corollaries illustrate this property and are presented without proof.
Corollary 4.3.9
For any sets A, B and C,
\(P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)\text{.}\)
Corollary 4.3.10
For any sets A, B, C and D,
\(P(A \cup B \cup C \cup D) = P(A) + P(B) + P(C) + P(D) - P(A \cap B) - P(A \cap C) - P(A \cap D) - P(B \cap C) - P(B \cap D) - P(C \cap D) + P(A \cap B \cap C) + P(A \cap B \cap D) + P(A \cap C \cap D) + P(B \cap C \cap D) - P(A \cap B \cap C \cap D)\text{.}\)
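As a quick sanity check of the three-set formula, the short Python sketch below builds a hypothetical equally likely sample space S = {1, ..., 12}, takes P(E) = |E|/|S|, and confirms that both sides of Corollary 4.3.9 agree; the choice of A, B and C is only illustrative.

from fractions import Fraction

S = set(range(1, 13))
A = {n for n in S if n % 2 == 0}   # even numbers
B = {n for n in S if n % 3 == 0}   # multiples of 3
C = {n for n in S if n <= 6}       # numbers at most 6

def P(E):
    return Fraction(len(E), len(S))  # equally likely outcomes

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))
print(lhs, rhs, lhs == rhs)  # both sides match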
Many times, you will be dealing with making selections from a sample space where each item in the space has an equal chance of being selected. This may happen, for example, when items in the sample space are of equal size, when selecting a card from a completely shuffled deck, when a fair coin is flipped, or when a normal fair die is rolled.
It is important to notice that not all outcomes are equally likely, even when there are only two of them. Indeed, it is generally not an equally likely situation when picking the winner of a football game which pits, say, the New Orleans Saints professional football team against the New Orleans Home School Saints. Even though there are only two options, the probability of the professional team winning in most years ought to be much greater than the chance that the high school team will prevail.
When outcomes are equally likely (sometimes also described as "randomly selected"), each individual outcome has the same chance of being selected as any other. In this instance, determining the probability of a collection of outcomes is relatively simple.
Theorem 4.3.11 Probability of Equally Likely Events
If outcomes in S are equally likely, then for \(A \subset S\text{,}\)
\(P(A) = \frac{|A|}{|S|}\text{.}\)
Proof
Enumerate S = {\(x_1, x_2, ..., x_{|S|}\)} and note \(P( \{ x_k \} ) = c\) for some constant c since each item is equally likely. However, using each outcome as a disjoint event and the definition of probability,
\(1 = P(S) = P(\{ x_1 \}) + P(\{ x_2 \}) + \cdots + P(\{ x_{|S|} \}) = |S| \cdot c\)
and so \(c = \frac{1}{|S|}\text{.}\) Therefore, \(P( \{ x_k \} ) = \frac{1}{|S|}\text{.}\)
Hence, with A = {\(a_1, a_2, ..., a_{|A|}\)}, breaking up the disjoint probabilities as above gives
\(P(A) = P(\{ a_1 \}) + P(\{ a_2 \}) + \cdots + P(\{ a_{|A|} \}) = |A| \cdot \frac{1}{|S|} = \frac{|A|}{|S|}\)
as desired.
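Returning to the two-dice example, the theorem gives an exact count-based calculation. The sketch below (one way of listing the 36 equally likely ordered rolls in Python) recovers the 7/36 computed earlier.

from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))          # all 36 ordered rolls of two dice
A = [roll for roll in S if sum(roll) in (4, 5)]   # the rolls whose sum is 4 or 5

print(Fraction(len(A), len(S)))  # 7/36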
Let's see if you understand the relationship between frequency and relative frequency. In this exercise, presume "Probability" to be the fraction of outcomes you would logically expect.
So, these are simple calculations.
This one is a little harder and uses the binomial coefficients from Combinatorics.
Notice how the probabilities look similar to relative frequencies. It's just the case that you are counting ALL of the individual simple possibilities that lead to a success.
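For instance, in the same spirit (this is a hypothetical example, not the exercise above), the probability of getting exactly 2 heads in 5 flips of a fair coin counts the C(5,2) sequences containing two heads among the 2^5 equally likely sequences.

from fractions import Fraction
from math import comb

p = Fraction(comb(5, 2), 2**5)  # favorable sequences over all sequences
print(p)  # 5/16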