Section 4.2 Relative Frequency
When attempting to measure uncertainty precisely, one often resorts to examples or experiments that model the theoretical question of interest. Before we investigate statistical experiments, we need to establish some notation that we will use throughout the rest of this text.
S = Universal Set, Sample Space, or Outcome Space of the Experiment. This is the collection of all possibilities.
Random Experiment. A random experiment is a repeatable activity that has more than one possible outcome, all of which can be specified in advance but none of which can be known in advance with certainty.
Trial. Performing a Random Experiment one time and measuring the result.
A = Event. A collection of outcomes. Generally denoted by an upper case letter such as A, B, C, etc.
Success/Failure. When recording the result of a trial, a success for event A occurs when the outcome lies in A. If not, the trial is a failure. There is no qualitative meaning attached to these terms.
Mutually Exclusive Events. Two events that share no common outcomes. Also known as disjoint events.
|A| = Frequency. In a sequence of n trials, the frequency is the number of trials that resulted in a success for event A.
|A| / n = Relative Frequency. The proportion of successes out of the total number of trials.
Histogram. A bar chart representation of data in which the area of each bar corresponds to the value being described.
To investigate these terms and to motivate our discussion of probability, consider flipping coins using the interactive cell below. Notice that in this case the sample space is S = \{ Heads, Tails \} and the random experiment consists of flipping a fair coin one time. Each trial results in either a Head or a Tail. Since we are measuring both Heads and Tails, we will not worry about which counts as a success and which as a failure. Further, on each flip the outcomes Heads and Tails are mutually exclusive events. We count the frequencies and compute the relative frequencies for a varying number of trials, selected by you as you move the slider bar. Results are displayed using a histogram.
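If the interactive cell is not handy, a quick simulation conveys the same idea. The following is a minimal Python sketch, not the book's cell; the function name flip_coins and the parameter num_flips are our own labels. It flips a fair coin a chosen number of times and reports the frequency and relative frequency of Heads and Tails.

import random

def flip_coins(num_flips):
    """Simulate num_flips tosses of a fair coin and report frequencies."""
    heads = sum(1 for _ in range(num_flips) if random.random() < 0.5)
    tails = num_flips - heads
    print(f"Heads: frequency {heads}, relative frequency {heads / num_flips:.4f}")
    print(f"Tails: frequency {tails}, relative frequency {tails / num_flips:.4f}")

flip_coins(1000)   # try 10, 100, 1000, 10000 and watch the relative frequencies stabilize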
Question 1: What do you notice as the number of flips increases?
Question 2: Why do you rarely (if ever) get exactly the same number of Heads and Tails? Would you not "expect" that to happen?
You should have noticed that as the number of flips increases, the relative frequency of Heads (and of Tails) stabilizes around 0.5. This makes sense intuitively since there are two options for each individual flip, one of which is Heads and the other Tails, so over many flips each should occur about half the time.
Let’s try again with a random experiment consisting of rolling a single die one time. Note that the sample space in this case will be the outcomes S = \{ 1, 2, 3, 4, 5, 6 \}.
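As a stand-in for the interactive cell, here is a hedged sketch (the name roll_die and the choice of 6000 rolls are ours) that rolls a fair six-sided die repeatedly and tabulates the relative frequency of each face.

import random
from collections import Counter

def roll_die(num_rolls):
    """Roll a fair six-sided die num_rolls times and report each face's relative frequency."""
    counts = Counter(random.randint(1, 6) for _ in range(num_rolls))
    for face in range(1, 7):
        print(f"{face}: frequency {counts[face]}, relative frequency {counts[face] / num_rolls:.4f}")

roll_die(6000)   # each relative frequency should settle near 1/6, about 0.1667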
Notice that for a single die there is a larger number of options (six on a regular die), but once again the relative frequency of each outcome settled close to 1/6 as the number of rolls increased.
In general, this suggests a rule: if an experiment has k possible outcomes and each one has the same chance of occurring on a given trial, then over a large number of trials the relative frequency of each outcome settles near 1/k. That is, when the outcomes are "equally likely," this is a good model for the proportion of trials expected to produce any given outcome. However, it is not always true that outcomes are equally likely. Consider rolling two dice and measuring their sum:
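As with the earlier examples, a short simulation can stand in for the interactive cell. The sketch below (all names are ours) rolls a pair of fair dice repeatedly and tabulates the relative frequency of each possible sum.

import random
from collections import Counter

def roll_dice_sums(num_rolls):
    """Roll two fair dice num_rolls times and tabulate the relative frequency of each sum."""
    counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(num_rolls))
    for total in range(2, 13):
        print(f"sum {total:2d}: relative frequency {counts[total] / num_rolls:.4f}")

roll_dice_sums(10000)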
Question 1: What do you notice as the number of rolls increases?
Question 2: What do you expect for the relative frequencies and why are they not all exactly the same?
Notice that not only are the relative frequencies not all the same, they are not even close. To understand why this is different from the earlier examples, consider the possible outcomes from each pair of dice. Since we are measuring the sum of the dice, the possible sums (for a pair of standard 6-sided dice) run from 2 to 12. However, there is only one way to get a 2, namely the pair (1,1), while there are six ways to get a 7, namely the pairs (1,6), (2,5), (3,4), (4,3), (5,2), and (6,1). So it might make some sense that the likelihood of getting a 7 is 6 times that of getting a 2. Check to see whether that is the case with your experiment above.
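To check the counting argument above, one can enumerate all 36 equally likely ordered pairs and count how many produce each sum. The short sketch below does exactly that; it should confirm 1 way to make a 2 and 6 ways to make a 7.

from collections import Counter

# Enumerate all 36 equally likely ordered pairs (first die, second die)
# and count how many of them produce each possible sum.
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in range(2, 13):
    print(f"sum {total:2d}: {ways[total]} way(s), proportion {ways[total] / 36:.4f}")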
Play with the following interactive cell several times to investigate what you might expect to get when you repeatedly receive a "hand" of 5 standard playing cards. Can you imagine how you might possibly enumerate the entire list of possible outcomes by hand? Using this interactive cell, however, you can easily shuffle and deal 5-card hands over and over and then count the number of special poker outcomes.
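The interactive cell is not reproduced here, but a minimal sketch along the same lines might deal many 5-card hands from a shuffled standard deck and count how often one chosen poker outcome, say a hand containing at least one pair, occurs. The deck encoding and all names below are our own illustrative choices, not part of the text.

import random
from collections import Counter

RANKS = "23456789TJQKA"
SUITS = "CDHS"
DECK = [rank + suit for rank in RANKS for suit in SUITS]   # 52 cards

def has_pair(hand):
    """Return True if at least two cards in the hand share a rank."""
    rank_counts = Counter(card[0] for card in hand)
    return any(count >= 2 for count in rank_counts.values())

def deal_hands(num_hands):
    """Deal num_hands 5-card hands and report how often a pair (or better) appears."""
    successes = sum(1 for _ in range(num_hands) if has_pair(random.sample(DECK, 5)))
    print(f"at least one pair: frequency {successes}, relative frequency {successes / num_hands:.4f}")

deal_hands(10000)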
Sometimes you will find it useful to keep a running total of the relative frequencies. Such a cumulative approach is often called a distribution function.
Definition 4.2.1. Cumulative relative frequency.
For a collection of ordered outcomes \(x_1 \lt x_2 \lt \cdots \lt x_s\) with corresponding relative frequencies \(f_1, f_2, \ldots, f_s\text{,}\) the cumulative relative frequency is the function
\begin{equation*}
F(x) = \sum_{x_k \le x} f_k
\end{equation*}
Let’s consider the cumulative relative frequency for the sum of dice example seen earlier in this chapter.
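As a sketch of how Definition 4.2.1 plays out numerically, the code below (names and the 10000-roll choice are ours) simulates two-dice sums, computes their relative frequencies, and accumulates them into the running total F(x).

import random
from collections import Counter

def cumulative_relative_frequency(num_rolls):
    """Simulate two-dice sums, then print the running total F(x) of the relative frequencies."""
    counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(num_rolls))
    running_total = 0.0
    for total in range(2, 13):
        running_total += counts[total] / num_rolls
        print(f"F({total:2d}) = {running_total:.4f}")

cumulative_relative_frequency(10000)   # F(12) should come out to 1 (up to rounding)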