
Section 8.2 Poisson Distribution

Consider a Poisson process in which you observe an interval of fixed length T and let X count the number of successes, or changes, within that interval. The resulting distribution of X is called a Poisson distribution.

For a sufficiently large natural number n, break the given interval [0,T] into n uniform subintervals, each of width h = T/n. By the properties of Poisson processes, as n grows very large, h becomes very small and eventually small enough that

\begin{equation*} P(\text{exactly one success on a given interval}) = p = \lambda \frac{T}{n}. \end{equation*}

Since there are a finite number of independent subintervals, each containing a success with probability p, a Binomial distribution gives the corresponding probabilities for any finite n. Taking the limit as n approaches infinity then gives:

\begin{align*} f(x) & = P(x \text{ changes in } [0,T]) \\ & = \lim_{n \rightarrow \infty} \binom{n}{x} p^x (1-p)^{n-x}\\ & = \lim_{n \rightarrow \infty} \binom{n}{x} \left( \frac{\lambda T}{n} \right)^x \left( 1-\frac{\lambda T}{n} \right)^{n-x}\\ & = \lim_{n \rightarrow \infty} \frac{n(n-1)\cdots(n-x+1)}{x!} \left( \frac{\lambda T}{n} \right)^x \left( 1- \frac{\lambda T}{n} \right)^{n-x}\\ & = \frac{(\lambda T)^x}{x!} \lim_{n \rightarrow \infty} \frac{n(n-1)\cdots(n-x+1)}{n \cdot n \cdots n} \left( 1-\frac{\lambda T}{n} \right)^{n} \left( 1-\frac{\lambda T}{n} \right)^{-x}\\ & = \frac{(\lambda T)^x}{x!} \lim_{n \rightarrow \infty} \left( 1-\frac{1}{n} \right) \cdots \left( 1-\frac{x-1}{n} \right) \left( 1- \frac{\lambda T}{n} \right)^{n} \left( 1- \frac{\lambda T}{n} \right)^{-x}\\ & = \frac{(\lambda T)^x}{x!} \lim_{n \rightarrow \infty} \left( 1- \frac{\lambda T}{n} \right)^{n} \lim_{n \rightarrow \infty} \left( 1- \frac{\lambda T}{n} \right)^{-x}\\ & = \frac{(\lambda T)^x}{x!} \lim_{n \rightarrow \infty} \left( 1- \frac{\lambda T}{n} \right)^{n} \cdot 1\\ & = \frac{(\lambda T)^x}{x!} e^{-\lambda T} \end{align*}

where L'Hôpital's rule (applied to the logarithm of the limit) was utilized in the final step to show \(\lim_{n \rightarrow \infty} (1 - \frac{\lambda T}{n})^{n} = e^{-\lambda T}\text{.}\)
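The convergence of the Binomial probabilities toward this Poisson limit can be checked numerically. The Python sketch below (using an illustrative \(\lambda T = 4\) and \(x = 3\), values not from the text) compares the Binomial value for increasing n against the limiting Poisson value.

```python
import math

def binom_pmf(n, x, p):
    # Binomial probability of exactly x successes in n trials
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, mu):
    # Poisson probability with mean mu = lambda * T
    return mu**x / math.factorial(x) * math.exp(-mu)

mu = 4.0  # illustrative value of lambda * T
x = 3
for n in (10, 100, 10000):
    # p = lambda*T/n shrinks as n grows, matching the derivation
    print(n, binom_pmf(n, x, mu / n))
print("Poisson limit:", poisson_pmf(x, mu))
```

As n increases, the Binomial values approach the Poisson value, mirroring the limit computed above.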

Using the power series expansion for the natural exponential, we can verify that f(x) is a valid probability function:

\begin{align*} \sum_{x=0}^{\infty} f(x) & = \sum_{x=0}^{\infty} \frac{(\lambda T)^x}{x!} e^{-\lambda T} \\ & = e^{-\lambda T} \sum_{x=0}^{\infty} \frac{(\lambda T)^x}{x!} \\ & = e^{-\lambda T} e^{\lambda T} \\ & = 1 \end{align*}
Example 8.2.3. Router Requests.

Everyone using the internet utilizes a series of "routers" that spend their time waiting for someone to show up and ask for something to be done. Let's consider one such router which, over time, has been shown to receive on average 1000 such requests in any given 10 minute period during regular working hours. In general, a Poisson process with mean 1000 would seem to fit and therefore the Poisson distribution would be a good model. We will find out below that \(\lambda T = \mu = 1000\) and will use that here to get

\begin{equation*} f(x) = \frac{1000^x}{x!} e^{-1000}. \end{equation*}

So, suppose we would like to know the likelihood of receiving exactly 1020 requests in a 10 minute time interval. This means we need

\begin{equation*} P(X = 1020) = f(1020) = \frac{1000^{1020}}{1020!} e^{-1000} \end{equation*}

which cannot reasonably be computed directly on a regular calculator. However, many graphing calculators have built-in functions f(x) = poissonpdf(mu,x) and F(x) = poissoncdf(mu,x). To answer our question,

\begin{equation*} f(1020) = \text{poissonpdf(1000,1020)} \approx 0.01024. \end{equation*}

On the other hand, suppose the question asks whether 1020 or fewer requests will be made in the 10 minute interval. If so, then

\begin{equation*} F(1020) = \text{poissoncdf(1000,1020)} \approx 0.74258. \end{equation*}
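For readers without such a calculator, the same values can be computed in Python by working with the logarithm of the probability function, which avoids overflow on quantities like \(1000^{1020}\) and \(1020!\text{.}\) The helper names below are illustrative, not a standard library API.

```python
import math

def poisson_logpmf(x, mu):
    # Logarithm of the Poisson pmf; lgamma(x+1) = log(x!)
    return x * math.log(mu) - math.lgamma(x + 1) - mu

def poisson_pmf(x, mu):
    return math.exp(poisson_logpmf(x, mu))

def poisson_cdf(x, mu):
    # P(X <= x) by direct summation of the pmf
    return sum(poisson_pmf(k, mu) for k in range(x + 1))

mu = 1000
print(poisson_pmf(1020, mu))  # ~0.01024, matching poissonpdf(1000,1020)
print(poisson_cdf(1020, mu))  # ~0.74258, matching poissoncdf(1000,1020)
```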
Checkpoint 8.2.4. WebWork - Poisson.

Using the f(x) derived above, the mean is

\begin{align*} \mu & = E[X] \\ & = \sum_{x=0}^{\infty} x \cdot \frac{(\lambda T)^x}{x!} e^{-\lambda T}\\ & = \lambda T e^{-\lambda T} \sum_{x=1}^{\infty} \frac{(\lambda T)^{x-1}}{(x-1)!} \\ & = \lambda T e^{-\lambda T} \sum_{k=0}^{\infty} \frac{(\lambda T)^k}{k!} \\ & = \lambda T e^{-\lambda T} e^{\lambda T} \\ & = \lambda T \end{align*}

which confirms the use of \(\mu\) in the original probability formula.

Continuing with \(\mu = \lambda T\text{,}\) the variance is given by

\begin{align*} \sigma^2 & = E[X(X-1)] + \mu - \mu^2 \\ & = \sum_{x=0}^{\infty} x(x-1) \cdot \frac{\mu^x}{x!} e^{-\mu} + \mu - \mu^2\\ & = e^{-\mu} \mu^2 \sum_{x=2}^{\infty} \frac{\mu^{x-2}}{(x-2)!} + \mu - \mu^2\\ & = e^{-\mu} \mu^2 \sum_{k=0}^{\infty} \frac{\mu^k}{k!} + \mu - \mu^2\\ & = \mu^2 + \mu - \mu^2 \\ & = \mu \end{align*}

To derive the skewness and kurtosis, you can depend upon Sage; see the live cell below.