  • $\begingroup$ Hint: yes, if there is only a single $y$ in the terminal leaf, then of course the optimal prediction is equal to that value: $\hat{y}=y$. However, we will usually have multiple training samples $y_1, \dots, y_n$ in the terminal leaf, which we want to summarize using a single prediction value $\hat{y}$. Which value will minimize the loss? (This will depend on how the loss is summarized over multiple training observations; you can probably assume it's averaged. Also, let's hope very much that none of the $y_i=0$.) $\endgroup$ – Stephan Kolassa Commented Dec 13, 2021 at 7:38
  • $\begingroup$ Incidentally, that the answer will probably not be the mean of the training observations is related to one issue with the Mean Absolute Percentage Error (MAPE), which is quite similar to your loss function (just without the squaring). $\endgroup$ – Stephan Kolassa Commented Dec 13, 2021 at 7:52
  • $\begingroup$ Thx @StephanKolassa. Having read the article: do you mean that the problem is that my loss function is not differentiable at $y_i = \hat{y}_i$? $\endgroup$ Commented Dec 13, 2021 at 8:20
  • $\begingroup$ No, your loss is quite nicely differentiable, in contrast to the MAPE. But note that there is no subscript $i$ on your prediction $\hat{y}$: you have multiple training instances $y_1, \dots, y_n$ in your leaf and want to summarize them with a single prediction $\hat{y}$. Just set up the loss function, averaging over the training samples, and differentiate (a worked sketch follows these comments). $\endgroup$ – Stephan Kolassa Commented Dec 13, 2021 at 8:26
  • $\begingroup$ @Stephan Kolassa: What do you mean by "averaging over the training samples"? If I just differentiate the sum of my loss function with respect to $\hat{y}$, I get exactly $\hat{y} = \sum\limits_{i = 1}^n y_i/n$ $\endgroup$ Commented Dec 13, 2021 at 8:32
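For concreteness, here is a minimal worked sketch of the calculation the comments point to, assuming the loss in question is the squared percentage error $\left(\frac{y_i - \hat{y}}{y_i}\right)^2$ (the squared analogue of the MAPE mentioned above, as the comparison suggests) and that it is averaged over the $n$ training samples in the leaf:

$$\frac{d}{d\hat{y}} \, \frac{1}{n}\sum_{i=1}^n \left(\frac{y_i - \hat{y}}{y_i}\right)^2 = \frac{2}{n}\sum_{i=1}^n \frac{\hat{y} - y_i}{y_i^2} = 0 \quad\Longrightarrow\quad \hat{y} = \frac{\sum_{i=1}^n 1/y_i}{\sum_{i=1}^n 1/y_i^2}.$$

The second derivative, $\frac{2}{n}\sum_{i=1}^n 1/y_i^2 > 0$, confirms this is a minimum. The optimal prediction is therefore a weighted average of the $y_i$ with weights $1/y_i^2$, which generally differs from the arithmetic mean unless all $y_i$ are equal. Differentiating the plain squared error $\sum_{i=1}^n (y_i - \hat{y})^2$ is what would give $\hat{y} = \frac{1}{n}\sum_{i=1}^n y_i$ instead, which may explain the discrepancy in the last comment.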