For this family of distributions the statistic $X_1+\cdots+X_n$ is sufficient, i.e. the conditional distribution of $(X_1,\ldots,X_n)$ given $X_1+\cdots+X_n$ does not depend on $\lambda.$
The statistic $X_1+\cdots+X_n$ is also complete for this family of distributions, i.e. there is no function $g$ for which $\operatorname E(g(X_1+\cdots+X_n))$ remains equal to zero as $\lambda$ changes, except $g(x)=0$ for all values of $x.$
Therefore the Lehmann–Scheffé theorem says that the conditional expected value, given $X_1+\cdots+X_n,$ of any unbiased estimator of $e^{-\lambda}$ is the UMVUE. Since the question as posted doesn't say which function $t$ is, I will assume this UMVUE is the estimator that was intended.
Now notice that $\Pr(X_1=0) = e^{-\lambda},$ so the indicator function of the event $\{\,X_1=0\,\},$ $$ I = \begin{cases} 1 & \text{if }X_1=0, \\ 0 & \text{otherwise,} \end{cases} $$ is an unbiased estimator of $e^{-\lambda}$. So we need \begin{align} & \operatorname E(I\mid X_1+\cdots + X_n) \\[6pt] = {} & \Pr(X_1=0\mid X_1+\cdots + X_n). \tag1 \\[12pt] \text{We have } & \Pr(X_1=0\mid X_1+\cdots+X_n=x) \\[6pt] = {} & \frac{\Pr(X_1=0\ \&\ X_1+\cdots +X_n=x)}{\Pr(X_1+\cdots + X_n=x)} \\[8pt] = {} & \frac{\Pr(X_1=0\ \&\ X_2+\cdots +X_n=x)}{\Pr(X_1+\cdots + X_n=x)} \\[8pt] = {} & \frac{\Pr(X_1=0)\Pr(X_2+\cdots + X_n = x)}{\Pr(X_1+\cdots +X_n=x)} \\[8pt] = {} & \frac{e^{-\lambda} \cdot((n-1)\lambda)^x e^{-(n-1)\lambda} /x!}{(n\lambda)^x e^{-n\lambda}/x!} \\[8pt] = {} & \left( \frac{n-1} n \right)^x, \\[12pt] \text{since } X_1 & \text{ is independent of } X_2+\cdots+X_n\sim\operatorname{Poisson}((n-1)\lambda). \\[2pt] \text{So line (1)} &\text{ above becomes:} \\[2pt] & \left( \frac{n-1} n \right)^{X_1+\cdots+X_n}. \end{align} This is what I am taking to be $T=t(X_1,\ldots,X_n).$
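The unbiasedness of $T$ is easy to check numerically. Here is a minimal Monte Carlo sketch; the values of $\lambda,$ $n,$ and the trial count are arbitrary choices for the illustration, and the Poisson sampler is Knuth's multiplication method.

```python
import math
import random

# Monte Carlo sketch: T = ((n-1)/n)^(X_1+...+X_n) should average to e^{-lam}.
# lam = 1.3, n = 5, trials = 100_000 are arbitrary choices for this check.
random.seed(0)
lam, n, trials = 1.3, 5, 100_000

def poisson(mu):
    # Knuth's multiplication method for sampling Poisson(mu).
    threshold, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

mean_T = sum(((n - 1) / n) ** sum(poisson(lam) for _ in range(n))
             for _ in range(trials)) / trials
print(mean_T, math.exp(-lam))  # the two should agree to a few decimal places
```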
(The maximum-likelihood estimator of $e^{-\lambda}$ is $e^{-(X_1+\cdots+X_n)/n},$ and I suspect that has a smaller mean-squared error than the unbiased estimator above, especially when $n$ is small.)
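That suspicion can be probed by simulation. The sketch below compares the two estimators' mean-squared errors for one arbitrary parameter choice ($\lambda = 0.5,$ $n = 3$); which estimator wins depends on $\lambda$ and $n.$

```python
import math
import random

# Monte Carlo sketch comparing the MSE of the MLE e^{-S/n} with that of the
# unbiased estimator ((n-1)/n)^S, where S = X_1 + ... + X_n.
# lam = 0.5, n = 3, trials = 100_000 are arbitrary choices for this sketch.
random.seed(1)
lam, n, trials = 0.5, 3, 100_000
target = math.exp(-lam)

def poisson(mu):
    # Knuth's multiplication method for sampling Poisson(mu).
    threshold, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

mse_mle = mse_umvue = 0.0
for _ in range(trials):
    s = sum(poisson(lam) for _ in range(n))
    mse_mle += (math.exp(-s / n) - target) ** 2
    mse_umvue += (((n - 1) / n) ** s - target) ** 2
mse_mle /= trials
mse_umvue /= trials
print(mse_mle, mse_umvue)
```

For these particular values the MLE's MSE does come out smaller; since the unbiased estimator's MSE is just its variance, the second printed number also estimates the variance asked about below.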
Now the question is: What is $ \displaystyle \operatorname{var}\left( \left( \frac{n-1} n\right)^{X_1+\cdots+X_n} \right)\text{?}$
I.e. what is $ \displaystyle \operatorname{var}\left( \left( \frac{n-1} n\right)^Y \right)$ when $Y\sim\operatorname{Poisson}(n\lambda)\text{?}$
(Possibly to be continued later$\,\ldots\,$)
We have $Y\sim\operatorname{Poisson}(\mu)$ and we want $\operatorname{var}(a^Y)$ for a number $a$ between $0$ and $1.$ \begin{align} \operatorname{var}(a^Y) = {} & \operatorname E\left( \left( a^Y \right)^2\right) - \big(\operatorname E(a^Y) \big)^2 \\[8pt] = {} & \operatorname E\big((a^2)^Y\big) - \big( \operatorname E(a^Y) \big)^2 \end{align} The problem of finding $\operatorname E(a^Y)$ and that of finding $\operatorname E((a^2)^Y)$ are the same problem; they just have two different bases, $a$ and $a^2.$
\begin{align} \operatorname E(a^Y) = {} & \sum_{y=0}^\infty a^y \cdot \frac{\mu^y e^{-\mu}}{y!} \\[8pt] = {} & e^{-\mu} \sum_{y=0}^\infty \frac{(a\mu)^y}{y!} \\[8pt] = {} & e^{-\mu} e^{a\mu} = e^{\mu(a-1)}. \\[8pt] \text{Similarly } & \operatorname E((a^2)^Y) = e^{\mu(a^2-1)}. \end{align} So $\operatorname{var}(a^Y) = e^{\mu(a^2-1)} - \big( e^{\mu(a-1)} \big)^2 = e^{\mu(a^2-1)} - e^{2\mu(a-1)}. $ With $a= \dfrac{n-1}n$ and $\mu=n\lambda$ we have $$ \operatorname{var}(a^Y) = e^{n\lambda(((n-1)/n)^2-1)} - e^{2n\lambda((n-1)/n-1)} = e^{-\lambda(2n-1)/n} - e^{-2\lambda}. $$
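As a sanity check, the closed form can be compared with a direct (truncated) summation of the Poisson series; the parameter values $\lambda = 1,$ $n = 4$ below are arbitrary choices for this sketch.

```python
import math

# Sanity check of var(a^Y) = e^{mu(a^2-1)} - e^{2 mu (a-1)} for Y ~ Poisson(mu),
# comparing the closed form against a truncated term-by-term series.
# lam = 1.0, n = 4 are arbitrary choices for this check.
lam, n = 1.0, 4
a, mu = (n - 1) / n, n * lam

def moment(base, mu, terms=60):
    # E(base^Y) for Y ~ Poisson(mu): sum base^y * mu^y e^{-mu} / y! term by term.
    total, term = 0.0, math.exp(-mu)
    for y in range(terms):
        total += term
        term *= base * mu / (y + 1)  # ratio of consecutive series terms
    return total

var_series = moment(a * a, mu) - moment(a, mu) ** 2
var_closed = math.exp(mu * (a * a - 1)) - math.exp(2 * mu * (a - 1))
print(var_series, var_closed)  # the two agree
```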