Section 4.6 Bayes' Theorem
Conditional probabilities 4.5.3 can be computed using the methods developed above if the appropriate information is available. Sometimes, however, you will have some information available, such as P(A|B), but need P(B|A). The ability to "play around with history" by switching what has been presumed to occur leads to an important result known as Bayes' Theorem.
Theorem 4.6.1. Bayes' Theorem.
Let S_1, S_2, ..., S_m be pairwise disjoint sets with S_1 ∪ S_2 ∪ ... ∪ S_m = S (i.e. a partition of the sample space S). Then for any A ⊂ S,

P(S_j | A) = P(A | S_j) P(S_j) / [ P(A | S_1) P(S_1) + P(A | S_2) P(S_2) + ... + P(A | S_m) P(S_m) ].

The conditional probability P(S_j | A) is called the posterior probability of S_j.
Proof.
Notice, by the definition of conditional probability 4.5.3 and the multiplication rule 4.5.5,

P(S_j | A) = P(S_j ∩ A) / P(A) = P(A | S_j) P(S_j) / P(A).

But using the disjointness of the partition,

P(A) = P(A ∩ S_1) + P(A ∩ S_2) + ... + P(A ∩ S_m) = P(A | S_1) P(S_1) + P(A | S_2) P(S_2) + ... + P(A | S_m) P(S_m).

Put these two expansions together to obtain the desired result.
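The result is easy to verify numerically. Here is a minimal sketch in plain Python (no Sage needed) using the same default values that appear in the Sage interact later in this section:

```python
# Numerical check of Bayes' Theorem for a three-set partition.
p_S = [0.35, 0.25, 0.40]          # P(S_1), P(S_2), P(S_3): a partition of S
p_A_given_S = [0.02, 0.01, 0.03]  # P(A|S_1), P(A|S_2), P(A|S_3)

# Law of total probability: P(A) = sum over k of P(A|S_k) P(S_k)
p_A = sum(a * s for a, s in zip(p_A_given_S, p_S))

# Bayes' Theorem: P(S_j|A) = P(A|S_j) P(S_j) / P(A)
posteriors = [a * s / p_A for a, s in zip(p_A_given_S, p_S)]

print(round(p_A, 4))  # 0.0215
print([round(p, 4) for p in posteriors])
```

Note that the posterior probabilities always sum to 1, since the denominator is exactly P(A).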
To see the theorem in action, consider the following problem (from http://stattrek.com/probability/bayes-theorem.aspx):
Marie is getting married tomorrow, at an outdoor ceremony in the desert. In recent years, it has rained only 5 days each year. Unfortunately, the weatherman has predicted rain for tomorrow. When it actually rains, the weatherman correctly forecasts rain 90% of the time. When it doesn't rain, he incorrectly forecasts rain 10% of the time. What is the probability that it will rain on the day of Marie's wedding?
Notice that all days can be classified into one of two disjoint options:
- Rainy, in which case we can deduce from the given information that P(Rain) = 5/365
- Not Rainy, and since this is the complement of the above, P(Not Rain) = 360/365
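These two cases, together with the forecaster's accuracy rates, plug straight into Bayes' Theorem. A short computation in plain Python (the variable names are ours):

```python
# Prior probabilities from the problem statement
p_rain = 5 / 365        # P(Rain)
p_dry = 360 / 365       # P(No Rain)

# Forecast behavior from the problem statement
p_fc_given_rain = 0.90  # P(Forecast Rain | Rain)
p_fc_given_dry = 0.10   # P(Forecast Rain | No Rain)

# Law of total probability: P(Forecast Rain)
p_fc = p_fc_given_rain * p_rain + p_fc_given_dry * p_dry

# Bayes' Theorem: P(Rain | Forecast Rain)
p_rain_given_fc = p_fc_given_rain * p_rain / p_fc
print(round(p_rain_given_fc, 4))  # 0.1111
```

Despite the gloomy forecast, there is only about an 11% chance of rain on the wedding day: because rain is so rare in the desert, most "rain" forecasts turn out to be false positives.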
Checkpoint 4.6.2. WebWork - Bayes'.
You have to be careful to extract the conditional probabilities from the problem.
Checkpoint 4.6.3. WebWork- Bigger Bayes'.
Notice that having the data expressed in tabular form sometimes makes it easier to deal with.
# This function is used to convert an input string into separate entries
def g(s):
    S = str(s).replace(',',' ').replace('(',' ').replace(')',' ').split()
    return S

@interact  # the decorator is needed so the input boxes and checkbox appear
def _(Partition_Probabilities
          =input_box('0.35,0.25,0.40',
                     label="$$ P(S_1),P(S_2),... $$", width=50),
      Conditional_Probabilities
          =input_box('0.02,0.01,0.03',
                     label='$$ P(A|S_1),P(A|S_2),... $$', width=45),
      print_numbers=checkbox(True, label='Numerical Results on Graphs?'),
      auto_update=False):
    Partition_Probabilities = g(Partition_Probabilities)
    Conditional_Probabilities = g(Conditional_Probabilities)
    n = len(Partition_Probabilities)
    n0 = len(Conditional_Probabilities)
    if n > n0:
        pretty_print("Unmatched data input.")
    else:  # data streams now are the same size!
        colors = rainbow(n)
        accum = float(0)   # running sum to check the partition probs sum to one
        ends = [0]         # where graphed partition sectors change
        mid = []           # used for placement of text
        p_Sk_given_A = []  # numerators P(A|S_k)P(S_k) of Bayes' Theorem
        pA = 0             # P(A)
        PP = []            # the numerical Partition Probabilities
        CP = []            # the numerical Conditional Probabilities
        for k in range(n):
            PP.append(float(Partition_Probabilities[k]))
            CP.append(float(Conditional_Probabilities[k]))
            p_Sk_given_A.append(PP[k]*CP[k])
            pA += p_Sk_given_A[k]
            accum = accum + PP[k]
            ends.append(accum)
            mid.append((ends[k]+accum)/2)
        #
        # From 0 to 1, saving angles for each partition sector boundary.
        # Later, multiply these by 2*pi to get actual sector boundary angles.
        #
        if abs(accum - float(1)) > 0.0000001:  # tolerance due to roundoff issues
            pretty_print("Sum of probabilities should equal 1.")
        else:  # probability data is sensible
            #
            # Venn diagram by drawing sectors from the angles determined above.
            # Create a circle of radius 1 to illustrate the sample space S.
            # Sectors get varying colors with their names printed on the edge.
            #
            G = circle((0,0), 1, rgbcolor='black', fill=False, alpha=0.4,
                       aspect_ratio=True, axes=False, thickness=5)
            for k in range(n):
                G += disk((0,0), 1, (ends[k]*2*pi, ends[k+1]*2*pi),
                          color=colors[mod(k,10)], alpha=0.2)
                G += text('$S_'+str(k+1)+'$', (1.1*cos(mid[k]*2*pi),
                          1.1*sin(mid[k]*2*pi)), rgbcolor='black')
            G += circle((0,0), 0.6, facecolor='yellow', fill=True,
                        alpha=0.1, thickness=5, edgecolor='black')
            # probabilities corresponding to each particular region
            if print_numbers:
                html("$P(A) = %s$" % (str(pA),))
                for k in range(n):
                    html("$P(S_{%s} | A)$" % (str(k+1))
                         + "$ = %s$" % str(p_Sk_given_A[k]/pA))
                    G += text(str(p_Sk_given_A[k]),
                              (0.4*cos(mid[k]*2*pi),
                               0.4*sin(mid[k]*2*pi)), rgbcolor='black')
                    G += text(str(PP[k] - p_Sk_given_A[k]),
                              (0.8*cos(mid[k]*2*pi),
                               0.8*sin(mid[k]*2*pi)), rgbcolor='black')
            # sectors now correspond in area to the Bayes' Theorem probabilities
            accum = float(0)
            ends = [0]  # where the graphed partition sectors change
            mid = []    # middle of each pie chart sector
            for k in range(n):
                accum += float(p_Sk_given_A[k]/pA)
                ends.append(accum)
                mid.append((ends[k]+accum)/2)
            H = circle((0,0), 1, rgbcolor='black', fill=False,
                       alpha=0, aspect_ratio=True, axes=False,
                       thickness=0)
            H += circle((0,0), 0.6, facecolor='yellow',
                        fill=True, alpha=0.1,
                        aspect_ratio=True, axes=False,
                        thickness=5, edgecolor='black')
            for k in range(n):
                H += disk((0,0), 0.6, (ends[k]*2*pi, ends[k+1]*2*pi),
                          color=colors[mod(k,10)], alpha=0.2)
                H += text('$S_'+str(k+1)+'|A$',
                          (0.7*cos(mid[k]*2*pi), 0.7*sin(mid[k]*2*pi)),
                          rgbcolor='black')
            # Bayesian probabilities using the smaller set A only
            if print_numbers:
                for k in range(n):
                    H += text(str(N(p_Sk_given_A[k]/pA, digits=4)),
                              (0.4*cos(mid[k]*2*pi), 0.4*sin(mid[k]*2*pi)),
                              rgbcolor='black')
            G.show(title='Venn diagram of partition with A in middle')
            print()
            H.show(title='Venn diagram presuming A has occurred')
Checkpoint 4.6.4. Insured vs Accident.
Your automobile insurance company uses past history to determine how to set rates by measuring the number of accidents caused by clients in various age ranges. The following table summarizes the proportion of those insured and the corresponding probabilities by age range:
Age | Proportion of Insured | Probability of Accident |
16-20 | 0.05 | 0.08 |
21-25 | 0.06 | 0.07 |
26-55 | 0.49 | 0.02 |
55-65 | 0.25 | 0.03 |
over 65 | 0.15 | 0.04 |
- Given that an accident has occurred, determine the conditional probability that the driver was in the 16-20 age range.
- Compare this to the (unconditional) probability that a randomly selected insured driver is in the 16-20 age range. Discuss the difference.
- Determine how much more the company should charge for someone in the 16-20 age range compared to someone in the 26-55 age range.
Plug the middle column into the first input box and the right column into the second input box of the Bayes Sage Cell.
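If you want to check your arithmetic by hand as well, the table translates directly into the law of total probability and Bayes' Theorem. A sketch in plain Python (the row labels are only for display):

```python
# (age range, proportion of insured P(S_k), accident rate P(A|S_k))
table = [
    ('16-20',   0.05, 0.08),
    ('21-25',   0.06, 0.07),
    ('26-55',   0.49, 0.02),
    ('55-65',   0.25, 0.03),
    ('over 65', 0.15, 0.04),
]

# Law of total probability: overall accident probability P(A)
p_accident = sum(p * a for _, p, a in table)

# Bayes' Theorem: P(age range | accident) for each row
for age, p, a in table:
    print(age, round(p * a / p_accident, 4))
```

Here P(A) is the overall accident rate across all insured drivers, and each posterior gives the share of accidents caused by that age range.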
Checkpoint 4.6.6. Spina bifida odds.
Congratulations...your family is having a baby! As part of prenatal care, some testing is part of the normal procedure, including one for spina bifida (a condition in which part of the spinal cord may be exposed). Indeed, measurement of maternal serum AFP values is a standard tool used in obstetrical care to identify pregnancies that may have an increased risk for this disorder. You want to make plans for the new child's care and want to know how seriously to take the test results. However, sometimes the test indicates that the child has the disorder when in actuality it does not (a false positive) and likewise may indicate that the child does not have the disorder when in fact it does (a false negative).
The combined accuracy rate for the screen to detect the chromosomal abnormalities mentioned above is approximately 85% with a false positive rate of 5%. This means that (from americanpregnancy.org)
- Approximately 85 out of every 100 babies affected by the abnormalities addressed by the screen will be identified. (True Positive)
- Approximately 5% of all normal pregnancies will receive a positive result or an abnormal level. (False Positive)
- Given that your test came back negative, determine the likelihood that the child will actually have spina bifida.
- Given that your test came back negative, determine the likelihood that the child will not have spina bifida.
- Given that a positive test means you have a 1/100 to 1/300 chance of experiencing one of the abnormalities, determine the likelihood of spina bifida in a randomly selected child.
You can get some help checking your arithmetic using the Bayes' Sage interact.