Proof

Using the probability function \(f(x)\) derived in the previous theorem,
\begin{align*} \mu & = E[X] \\ & = \sum_{x=0}^{\infty} x \cdot \frac{(\lambda T)^x}{x!} e^{-\lambda T}\\ & = \lambda T e^{-\lambda T} \sum_{x=1}^{\infty} \frac{(\lambda T)^{x-1}}{(x-1)!} \\ & = \lambda T e^{-\lambda T} \sum_{k=0}^{\infty} \frac{(\lambda T)^k}{k!} \\ & = \lambda T e^{-\lambda T} e^{\lambda T} \\ & = \lambda T \end{align*}
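As a check, the series manipulation above can be reproduced symbolically. This is a minimal sketch using SymPy (which Sage bundles) rather than a Sage live cell; the symbol names `lam` and `T` are chosen here for illustration:

```python
import sympy as sp

x = sp.symbols('x', integer=True, nonnegative=True)
lam, T = sp.symbols('lam T', positive=True)

# Poisson probability function with mean lam*T
pmf = (lam * T)**x / sp.factorial(x) * sp.exp(-lam * T)

# E[X] = sum over x of x * f(x); SymPy evaluates the exponential series
mean = sp.summation(x * pmf, (x, 0, sp.oo))
print(sp.simplify(mean))
```

The printed result simplifies to \(\lambda T\), matching the hand derivation.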
which confirms the use of \(\mu\) in the original probability formula.
Continuing with \(\mu = \lambda T\text{,}\) the variance is given by
\begin{align*} \sigma^2 & = E[X(X-1)] + \mu - \mu^2 \\ & = \sum_{x=0}^{\infty} x(x-1) \cdot \frac{\mu^x}{x!} e^{-\mu} + \mu - \mu^2\\ & = e^{-\mu} \mu^2 \sum_{x=2}^{\infty} \frac{\mu^{x-2}}{(x-2)!} + \mu - \mu^2\\ & = e^{-\mu} \mu^2 \sum_{k=0}^{\infty} \frac{\mu^k}{k!} + \mu - \mu^2\\ & = \mu^2 + \mu - \mu^2 \\ & = \mu \end{align*}
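The variance derivation can be checked the same way, by computing the second factorial moment \(E[X(X-1)]\) symbolically. Again a SymPy sketch, with `mu` standing in for \(\mu = \lambda T\):

```python
import sympy as sp

x = sp.symbols('x', integer=True, nonnegative=True)
mu = sp.symbols('mu', positive=True)

# Poisson probability function with mean mu
pmf = mu**x / sp.factorial(x) * sp.exp(-mu)

# second factorial moment E[X(X-1)]; the series collapses to mu**2
fm = sp.summation(x * (x - 1) * pmf, (x, 0, sp.oo))

# sigma^2 = E[X(X-1)] + mu - mu^2
var = sp.simplify(fm + mu - mu**2)
print(var)
```

The result simplifies to \(\mu\), confirming that the Poisson variance equals its mean.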
To derive the skewness and kurtosis, you can rely on Sage; see the live cell below.
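For readers without a live cell handy, the same computation can be sketched in plain Python with SymPy's `sympy.stats` module, which provides a `Poisson` random variable along with `skewness` and `kurtosis`:

```python
from sympy import simplify, sqrt, symbols
from sympy.stats import Poisson, skewness, kurtosis

mu = symbols('mu', positive=True)
X = Poisson('X', mu)

# skewness of a Poisson variable is 1/sqrt(mu)
print(simplify(skewness(X)))

# kurtosis (non-excess) is 3 + 1/mu
print(simplify(kurtosis(X)))
```

Note that SymPy's `kurtosis` returns the non-excess (Pearson) kurtosis; subtract 3 for the excess kurtosis.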