
[The problem statement was posted as an image. From the context of the answer below: given an i.i.d. sample $X_1,\ldots,X_n$ from a Poisson$(\lambda)$ distribution, find the UMVUE of $e^{-\lambda}$ and its variance.]

I tried finding the Cramér–Rao lower bound for the variance and got $(\lambda/n)e^{-2\lambda}$. Then I got stuck: the UMVUE doesn't attain the CRLB, so I can't find the exact variance of the estimator of $e^{-\lambda}$; I only have a lower bound on the variance, not the exact value. Where am I going wrong? (Should I perhaps take the maximum-likelihood approach and use the invariance property together with the asymptotic distribution of the MLE?)

  • Answered here: stats.stackexchange.com/questions/436384/… – Commented Feb 12 at 14:32
  • $$ \begin{align} & X_1,\ldots,X_n \\ & X_1,\text{...} X_n \end{align} $$ Whoever typed the question has the first expression above on the first line and the second on the second. I suspect the first is coded in LaTeX as \ldots or maybe just \dots, and the second as .... That shows you the difference, and why the \dots control sequence exists. (There is also a missing comma.) – Commented Feb 12 at 18:14

1 Answer


For this family of distributions the statistic $X_1+\cdots+X_n$ is sufficient, i.e. the conditional distribution of $(X_1,\ldots,X_n)$ given $X_1+\cdots+X_n$ does not depend on $\lambda.$

The statistic $X_1+\cdots+X_n$ is also complete for this family of distributions, i.e. there is no function $g$ for which $\operatorname E(g(X_1+\cdots+X_n))$ remains equal to zero as $\lambda$ changes, except $g(x)=0$ for all values of $x.$

Therefore the Lehmann–Scheffé theorem says that the conditional expected value, given $X_1+\cdots+X_n,$ of any unbiased estimator of $e^{-\lambda}$ is the UMVUE. Since the question as posted doesn't say what the function $t$ is, I will assume the UMVUE is the estimator that was intended.

Now notice that since $\Pr(X_1=0) = e^{-\lambda},$ the indicator function of the event $\{\,X_1=0\,\}$ is an unbiased estimator of $e^{-\lambda}$: $$ I = \begin{cases} 1 & \text{if }X_1=0, \\ 0 & \text{otherwise.} \end{cases} $$ So we need \begin{align} & \operatorname E(I\mid X_1+\cdots + X_n) \\[6pt] = {} & \Pr(X_1=0\mid X_1+\cdots + X_n). \tag1 \end{align} Since $X_2+\cdots+X_n$ is independent of $X_1$ and has a Poisson distribution with mean $(n-1)\lambda,$ we have \begin{align} & \Pr(X_1=0\mid X_1+\cdots+X_n=x) \\[6pt] = {} & \frac{\Pr(X_1=0\ \&\ X_2+\cdots +X_n=x)}{\Pr(X_1+\cdots + X_n=x)} \\[8pt] = {} & \frac{\Pr(X_1=0)\Pr(X_2+\cdots + X_n = x)}{\Pr(X_1+\cdots +X_n=x)} \\[8pt] = {} & \frac{e^{-\lambda} \cdot((n-1)\lambda)^x e^{-(n-1)\lambda} /x!}{(n\lambda)^x e^{-n\lambda}/x!} \\[8pt] = {} & \left( \frac{n-1} n \right)^x. \end{align} So line (1) above becomes $$ \left( \frac{n-1} n \right)^{X_1+\cdots+X_n}. $$ This is what I am taking to be $T=t(X_1,\ldots,X_n).$
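As a quick numerical sanity check (a sketch of my own, not part of the original answer; the function name is mine), one can verify that $T=\left(\frac{n-1}n\right)^{X_1+\cdots+X_n}$ is unbiased for $e^{-\lambda}$ by summing against the Poisson$(n\lambda)$ pmf of $S=X_1+\cdots+X_n$:

```python
import math

# Sketch of a check (not from the answer): S = X_1 + ... + X_n ~ Poisson(n*lam),
# and the estimator T = ((n-1)/n)^S should satisfy E[T] = exp(-lam).
def umvue_mean(n, lam, terms=200):
    """Compute E[((n-1)/n)^S] by summing over the Poisson(n*lam) pmf."""
    a = (n - 1) / n
    mu = n * lam
    pmf = math.exp(-mu)          # P(S = 0)
    total = 0.0
    for s in range(terms):
        total += a**s * pmf
        pmf *= mu / (s + 1)      # recurrence: P(S = s+1) = P(S = s) * mu / (s+1)
    return total

print(umvue_mean(5, 1.3))   # should be close to exp(-1.3) ≈ 0.2725
```

The sum is just the probability generating function $\operatorname E(a^S)=e^{\mu(a-1)}$ evaluated at $a=(n-1)/n,$ $\mu=n\lambda,$ which collapses to $e^{-\lambda}$.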

(The maximum-likelihood estimator of $e^{-\lambda}$ is $e^{-(X_1+\cdots+X_n)/n},$ and I suspect that has a smaller mean-squared error than the unbiased estimator above, especially when $n$ is small.)
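That suspicion can be probed with a small Monte Carlo sketch (again my own illustration, with hypothetical function names; the Poisson sampler uses Knuth's multiplication method, which is adequate for small means):

```python
import math
import random

def poisson_sample(mu, rng):
    """Draw one Poisson(mu) variate via Knuth's multiplication method."""
    threshold = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def mse_comparison(n, lam, reps=20000, seed=0):
    """Estimate the MSE of the UMVUE ((n-1)/n)^S and of the MLE exp(-S/n)."""
    rng = random.Random(seed)
    target = math.exp(-lam)
    se_umvue = se_mle = 0.0
    for _ in range(reps):
        s = sum(poisson_sample(lam, rng) for _ in range(n))
        se_umvue += (((n - 1) / n) ** s - target) ** 2
        se_mle += (math.exp(-s / n) - target) ** 2
    return se_umvue / reps, se_mle / reps

print(mse_comparison(5, 1.0))
```

For $n=5,$ $\lambda=1$ the two mean-squared errors come out very close (both roughly $0.03$), so any advantage of the MLE at these parameter values is small.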

Now the question is: What is $ \displaystyle \operatorname{var}\left( \left( \frac{n-1} n\right)^{X_1+\cdots+X_n} \right)\text{?}$

I.e. what is $ \displaystyle \operatorname{var}\left( \left( \frac{n-1} n\right)^Y \right)$ when $Y\sim\operatorname{Poisson}(n\lambda)\text{?}$


We have $Y\sim\operatorname{Poisson}(\mu)$ and we want $\operatorname{var}(a^Y)$ for a number $a$ between $0$ and $1.$ \begin{align} \operatorname{var}(a^Y) = {} & \operatorname E\left( \left( a^Y \right)^2\right) - \big(\operatorname E(a^Y) \big)^2 \\[8pt] = {} & \operatorname E\big((a^2)^Y\big) - \big( \operatorname E(a^Y) \big)^2 \end{align} The problem of finding $\operatorname E(a^Y)$ and that of finding $\operatorname E((a^2)^Y)$ are the same problem; they just have two different numbers, $a$ and $a^2,$ as the base.

\begin{align} \operatorname E(a^Y) = {} & \sum_{y=0}^\infty a^y \cdot \frac{\mu^y e^{-\mu}}{y!} \\[8pt] = {} & e^{-\mu} \sum_{y=0}^\infty \frac{(a\mu)^y}{y!} \\[8pt] = {} & e^{-\mu} e^{a\mu} = e^{\mu(a-1)}. \\[8pt] \text{Similarly } & \operatorname E((a^2)^Y) = e^{\mu(a^2-1)}. \end{align} So $\operatorname{var}(a^Y) = e^{\mu(a^2-1)} - \big( e^{\mu(a-1)} \big)^2 = e^{\mu(a^2-1)} - e^{2\mu(a-1)}. $ With $a= \dfrac{n-1}n$ and $\mu=n\lambda$ we have $$ \operatorname{var}(a^Y) = e^{n\lambda\left(((n-1)/n)^2-1\right)} - e^{2n\lambda((n-1)/n-1)} = e^{-2\lambda}\left(e^{\lambda/n}-1\right). $$
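The closed form can be checked against a direct summation over the Poisson pmf (a sketch of my own; the function names are illustrative):

```python
import math

def var_closed_form(a, mu):
    """var(a^Y) for Y ~ Poisson(mu), via the pgf identity E[t^Y] = exp(mu*(t-1))."""
    return math.exp(mu * (a**2 - 1)) - math.exp(2 * mu * (a - 1))

def var_direct(a, mu, terms=300):
    """Compute E[(a^2)^Y] - (E[a^Y])^2 by summing the Poisson(mu) pmf directly."""
    pmf = math.exp(-mu)          # P(Y = 0)
    m1 = m2 = 0.0
    for y in range(terms):
        m1 += a**y * pmf
        m2 += (a * a)**y * pmf
        pmf *= mu / (y + 1)      # P(Y = y+1) = P(Y = y) * mu / (y+1)
    return m2 - m1**2

n, lam = 4, 0.7
a, mu = (n - 1) / n, n * lam
print(var_closed_form(a, mu), var_direct(a, mu))   # both ≈ 0.0472
```

With $a=(n-1)/n$ and $\mu=n\lambda,$ both computations also agree with the simplification $e^{-2\lambda}\big(e^{\lambda/n}-1\big),$ since $n\lambda\big(((n-1)/n)^2-1\big)=\lambda/n-2\lambda$ and $2n\lambda\big((n-1)/n-1\big)=-2\lambda.$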

  • Michael, I agree with the approach you sketched, which is nice, and I would have upvoted if not for the fact that the OP has already found the variance. They were confused about why the variance of the UMVUE didn't match the CR lower bound, which has been explained in the other linked threads. But I appreciate your answer, no doubt. – Commented Feb 12 at 19:54
