Let $y_t$ be the actual and $\hat{y}_t$ the forecast at time $t$, for times $t=1, \dots, T$.
You calculate the denominator used in the MASE as
$$ d := \frac{1}{T}\sum_{t=1}^T|\hat{y}_t-y_t|.$$
For a time point $\theta$, you calculate an Absolute Scaled Error as
$$ \text{ASE}_\theta = \frac{|\hat{y}_\theta-y_\theta|}{d} =
\frac{|\hat{y}_\theta-y_\theta|}{\frac{1}{T}\sum_{t=1}^T|\hat{y}_t-y_t|}.$$
Finally, you calculate the MASE as
$$ \text{MASE} = \frac{1}{T}\sum_{\theta=1}^T\text{ASE}_\theta=
\frac{1}{T}\sum_{\theta=1}^T\frac{|\hat{y}_\theta-y_\theta|}{\frac{1}{T}\sum_{t=1}^T|\hat{y}_t-y_t|} =
\frac{\frac{1}{T}\sum_{\theta=1}^T|\hat{y}_\theta-y_\theta|}{\frac{1}{T}\sum_{t=1}^T|\hat{y}_t-y_t|},$$
which is equal to $1$, because the numerator and the denominator are the exact same sum.
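A minimal numerical sketch of this identity, using made-up actuals and forecasts purely for illustration: scaling each absolute error by the mean absolute error of the *same* forecast and then averaging always returns exactly $1$, no matter how good or bad the forecast is.

```python
import numpy as np

# Illustrative actuals and forecasts (arbitrary numbers, not from the question)
y = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
y_hat = np.array([110.0, 120.0, 128.0, 131.0, 119.0, 138.0])

abs_errors = np.abs(y_hat - y)
d = abs_errors.mean()      # denominator computed from the forecast itself
ase = abs_errors / d       # "scaled" errors
print(ase.mean())          # 1.0 (up to floating-point rounding), by construction
```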
Your error lies in your calculation of $d$. You need to use the MAE of some specific benchmark that you want to compare your forecast to. In the original formulation, Hyndman & Koehler (2006) used the in-sample MAE of the naive one-step-ahead forecast for $d$, but others use a seasonal naive method, or some other benchmark, evaluated in-sample or out-of-sample.
Essentially, what you did was to use the exact forecast you were evaluating as the benchmark that yielded $d$. So it is not surprising that when you compare your benchmark to itself, you get $1$.
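Here is a short sketch of the MASE in the spirit of Hyndman & Koehler (2006), where the denominator is the in-sample MAE of the naive one-step-ahead forecast (predicting $y_t$ by $y_{t-1}$). The names `y_train`, `y_test` and `y_hat`, and all the numbers, are illustrative assumptions, not taken from your question.

```python
import numpy as np

def mase(y_train, y_test, y_hat):
    """Mean Absolute Scaled Error with a naive one-step-ahead in-sample benchmark."""
    # MAE of the naive forecast in-sample: |y_t - y_{t-1}| averaged over the training data
    naive_mae = np.mean(np.abs(np.diff(y_train)))
    # Out-of-sample MAE of the forecast under evaluation, scaled by the benchmark MAE
    return np.mean(np.abs(y_hat - y_test)) / naive_mae

y_train = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
y_test = np.array([148.0, 136.0, 119.0])
y_hat = np.array([140.0, 139.0, 125.0])
print(mase(y_train, y_test, y_hat))
```

For a seasonal naive benchmark with season length $m$, you would replace `np.diff(y_train)` by `y_train[m:] - y_train[:-m]`.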
As a final note, because $d$ is a fixed number, minimizing the MASE amounts to minimizing the MAE, which elicits the conditional median; that may or may not be what you want (Kolassa, 2020).