  • Because RMSE and MAE are two different measures of error, a numerical comparison between them (which is involved in asserting that MAE is "lower" than RMSE) does not seem meaningful. That line must have been fit according to some criterion: that criterion, whatever it is, must be the relevant measure of error. Commented Jan 22, 2013 at 18:33
  • The line was fitted using least squares, but the picture is just an example to show the difference in measured error. My real issue is using an optimiser to solve for four function parameters by minimising some measure of error, MAE or RMSE (a sketch of such a fit follows these comments). Commented Jan 22, 2013 at 18:47
  • Thank you for the clarification. But what error are you interested in, precisely? The error in the fit or the errors in the parameter estimates? Commented Jan 22, 2013 at 18:48
  • The error in the fit. I have some lab samples that give y, which I want to predict using a function. I optimise the function over four exponents by minimising the error of the fit between the observed and predicted data. Commented Jan 22, 2013 at 18:57
  • In RMSE the sample size enters as a square root: RMSE is the root of the MSE, so the root of the sum of squared errors is divided by $\sqrt{n}$ rather than by $n$ (both definitions are written out below). To me that looks like a convention rather than a necessity; dividing the root of the sum of squared errors by $n$ would seem more natural. In that respect MAE is better. Commented Mar 8, 2013 at 0:11
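For reference, the standard definitions of the two measures the comments compare (not spelled out in the thread itself), with residuals $e_i = y_i - \hat{y}_i$:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |e_i|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2} = \frac{\sqrt{\sum_{i=1}^{n} e_i^2}}{\sqrt{n}},$$

which is the $\sqrt{n}$ in the denominator that the last comment refers to.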
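The fit described in the comments can be sketched as follows. This is a minimal illustration rather than the asker's actual setup: the four-parameter model, the synthetic data standing in for the lab samples, and the choice of Nelder-Mead are all assumptions, since neither the functional form nor the data appear in the thread.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical four-parameter model; the real functional form is not
    # given in the thread, so this is only a placeholder.
    def model(params, x):
        a, b, c, d = params
        return a * x**b + c * x**d

    def mae(params, x, y):
        return np.mean(np.abs(y - model(params, x)))

    def rmse(params, x, y):
        return np.sqrt(np.mean((y - model(params, x)) ** 2))

    # Synthetic data standing in for the lab samples.
    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 10.0, 50)
    y = 2.0 * x**1.5 + 0.5 * x**0.3 + rng.normal(scale=2.0, size=x.size)

    x0 = np.ones(4)  # starting guess for the four parameters
    fit_mae = minimize(mae, x0, args=(x, y), method="Nelder-Mead")
    fit_rmse = minimize(rmse, x0, args=(x, y), method="Nelder-Mead")

    print("MAE-optimal parameters: ", fit_mae.x)
    print("RMSE-optimal parameters:", fit_rmse.x)

The two objectives generally give different parameter estimates, because RMSE penalises large residuals more heavily than MAE; Nelder-Mead is used here because the MAE objective is not smooth.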