Section 5.5 Generating Functions
In the previous section, you took a given probability distribution and computed several "special" expected values, such as the mean and variance, that provided useful or interesting facts about that particular distribution. It is possible, however, to embed many of these metrics in a single "generating function".
Definition 5.5.1. Moment Generating Function.
Given a probability function \(f(x)\text{,}\) the moment generating function is a transformation given by
\begin{equation*}
M(t) = E[e^{tX}]
\end{equation*}
where the expected value is a summation or an integral depending upon the nature of the random variable \(X\text{.}\) If the expected value does not exist (perhaps due to an \(f(x)\) with asymptotes), then \(M(t)\) does not exist.
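For example, suppose \(X\) is a continuous variable with the exponential density \(f(x) = \lambda e^{-\lambda x}\) for \(x \ge 0\) and some \(\lambda > 0\) (an illustrative choice, not a distribution required by this section). Then
\begin{equation*}
M(t) = E[e^{tX}] = \int_0^{\infty} e^{tx} \lambda e^{-\lambda x} dx = \frac{\lambda}{\lambda - t}, \qquad t < \lambda,
\end{equation*}
and the moment generating function fails to exist for \(t \ge \lambda\text{,}\) since the integral diverges there.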
Definition 5.5.2. Probability Generating Function.
Given a probability function \(f(x)\text{,}\) the probability generating function is a transformation given by
\begin{equation*}
N(t) = E[t^X]
\end{equation*}
where the expected value is a summation or an integral depending upon the nature of the random variable \(X\text{,}\) again presuming that the resulting expected value exists.
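For example, if \(X\) is a discrete variable with the Poisson probability function \(f(x) = \frac{e^{-\lambda} \lambda^x}{x!}\) for \(x = 0, 1, 2, \ldots\) (again, only an illustration), then
\begin{equation*}
N(t) = E[t^X] = \sum_{x=0}^{\infty} t^x \frac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \frac{(\lambda t)^x}{x!} = e^{\lambda(t-1)}.
\end{equation*}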
Theorem 5.5.3. Relationship between \(M(t)\) and \(N(t)\).
\(M(t) = N(e^t)\) and \(N(t) = M(\ln(t))\text{.}\)
Proof.
Using the definitions, \(M(t) = E[e^{tX}] = E[(e^t)^X] = N(e^t)\text{.}\) Replacing \(t\) with \(\ln(t)\text{,}\) for \(t > 0\text{,}\) gives \(M(\ln(t)) = N(e^{\ln(t)}) = N(t)\text{.}\)
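To see this in action with the Poisson illustration above, \(N(t) = e^{\lambda(t-1)}\) gives
\begin{equation*}
M(t) = N(e^t) = e^{\lambda(e^t - 1)},
\end{equation*}
which is exactly what a direct computation of \(E[e^{tX}]\) for that probability function produces.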
Theorem 5.5.4. \(M(t)\) Properties.
\begin{equation*}
M(0)=1
\end{equation*}
\begin{equation*}
M'(0)=\mu
\end{equation*}
\begin{equation*}
M''(0)=\sigma^2 + \mu^2
\end{equation*}
Proof.
For the first result, notice
\begin{equation*}
M(0) = E[e^{0}] = E[1] = 1
\end{equation*}
follows immediately.
For the next results, differentiate under the summation or integral sign (justified whenever \(M(t)\) exists on an open interval around \(t=0\)). Here, let’s consider the case where \(X\) is a continuous variable and leave the discrete case for you.
\begin{equation*}
M'(t) = D_t \left [ \int_R e^{tx} f(x) dx \right ] = \int_R x e^{tx} f(x) dx
\end{equation*}
and then evaluating at \(t=0\) gives
\begin{equation*}
M'(0) = \int_R x e^{0} f(x) dx = E[X] = \mu .
\end{equation*}
Continuing,
\begin{equation*}
M''(t) = D_t \left [ \int_R x e^{tx} f(x) dx \right ] = \int_R x^2 e^{tx} f(x) dx
\end{equation*}
and evaluating at \(t=0\) gives
\begin{equation*}
M''(0) = \int_R x^2 e^{0} f(x) dx = E[X^2] = \sigma^2 + \mu^2.
\end{equation*}
It should be noted that one may also determine the skewness and the kurtosis in a similar manner, using the third and fourth derivatives of \(M(t)\) evaluated at \(t=0\) to obtain \(E[X^3]\) and \(E[X^4]\text{.}\)
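Returning to the exponential illustration from above, where \(M(t) = \frac{\lambda}{\lambda - t}\text{,}\) the derivatives are easy to compute:
\begin{gather*}
M'(t) = \frac{\lambda}{(\lambda - t)^2} \Rightarrow M'(0) = \frac{1}{\lambda} = \mu\\
M''(t) = \frac{2\lambda}{(\lambda - t)^3} \Rightarrow M''(0) = \frac{2}{\lambda^2} = \sigma^2 + \mu^2
\end{gather*}
so that \(\sigma^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}\text{,}\) in agreement with the mean and variance of that distribution.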
Theorem 5.5.5. \(N(t)\) Properties.
For discrete variable \(X\text{,}\)
\begin{equation*}
P(X=k) = \frac{N^{(k)}(0)}{k!}
\end{equation*}
Proof.
Notice, in the discrete case where \(R = \{0, 1, 2, \ldots\}\text{,}\)
\begin{equation*}
N(t) = E[t^X] = \sum_R t^x f(x) = f(0) + t f(1) + t^2 f(2) + t^3 f(3) + \cdots
\end{equation*}
Therefore, by continually taking derivatives with respect to t and then evaluating those derivatives at t=0,
\begin{gather*}
N(0) = f(0)\\
N'(0) = f(1)\\
N''(0) = 2 f(2)\\
N'''(0) = 6 f(3)
\end{gather*}
etc.
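As a check with the Poisson illustration, \(N(t) = e^{\lambda(t-1)}\) gives \(N''(t) = \lambda^2 e^{\lambda(t-1)}\text{,}\) so
\begin{equation*}
P(X=2) = \frac{N''(0)}{2!} = \frac{\lambda^2 e^{-\lambda}}{2},
\end{equation*}
which matches the Poisson probability function evaluated at \(x=2\text{.}\)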
Corollary 5.5.6. Random Variables with Identical Probability Generating Functions have Identical Distributions.
If random variables \(X_1\) and \(X_2\) have the same probability generating function (and therefore the same moment generating function), then the variables have identical distributions.
Proof.
Apply the previous result: the derivatives of the common probability generating function determine \(P(X=k)\) for every \(k\text{,}\) so both variables have the same probability function \(f(x)\text{.}\)
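For example, if a computation for some unknown discrete variable \(X\) yields \(N(t) = e^{\lambda(t-1)}\text{,}\) the corollary guarantees that \(X\) must have the Poisson probability function from the earlier illustration,
\begin{equation*}
f(x) = \frac{e^{-\lambda} \lambda^x}{x!}, \qquad x = 0, 1, 2, \ldots,
\end{equation*}
since no other distribution can share that probability generating function.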