The mean absolute scaled error has the following desirable properties:[3]
For a non-seasonal time series,[9] the mean absolute scaled error is estimated by

{\displaystyle \mathrm {MASE} =\mathrm {mean} \left({\frac {\left|e_{j}\right|}{{\frac {1}{T-1}}\sum _{t=2}^{T}\left|Y_{t}-Y_{t-1}\right|}}\right)={\frac {{\frac {1}{J}}\sum _{j}\left|e_{j}\right|}{{\frac {1}{T-1}}\sum _{t=2}^{T}\left|Y_{t}-Y_{t-1}\right|}}}
where the numerator ej is the forecast error for a given period (with J the number of forecasts), defined as the actual value (Yj) minus the forecast value (Fj) for that period, ej = Yj − Fj, and the denominator is the mean absolute error of the one-step "naive forecast method" on the training set (here defined as t = 1..T),[11] which uses the actual value from the prior period as the forecast: Ft = Yt−1.[12]
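As a concrete illustration, here is a minimal sketch of this estimator in plain Python (the function name and example values are my own, not from a library):

```python
def mase_nonseasonal(training, actual, forecast):
    """Non-seasonal MASE: out-of-sample MAE scaled by the in-sample
    MAE of the one-step naive forecast F_t = Y_{t-1}."""
    T = len(training)
    # Denominator: mean absolute error of the naive forecast over t = 2..T.
    scale = sum(abs(training[t] - training[t - 1]) for t in range(1, T)) / (T - 1)
    # Numerator: mean absolute forecast error e_j = Y_j - F_j over the J periods.
    mae = sum(abs(y - f) for y, f in zip(actual, forecast)) / len(actual)
    return mae / scale

# A training set that rises by 1 each period has naive-forecast MAE of 1,
# so the MASE equals the raw MAE of the forecasts.
print(mase_nonseasonal([1, 2, 3, 4, 5], actual=[6, 7], forecast=[5, 8]))  # 1.0
```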
For a seasonal time series, the mean absolute scaled error is estimated in a manner similar to the method for non-seasonal time series:
{\displaystyle \mathrm {MASE} =\mathrm {mean} \left({\frac {\left|e_{j}\right|}{{\frac {1}{T-m}}\sum _{t=m+1}^{T}\left|Y_{t}-Y_{t-m}\right|}}\right)={\frac {{\frac {1}{J}}\sum _{j}\left|e_{j}\right|}{{\frac {1}{T-m}}\sum _{t=m+1}^{T}\left|Y_{t}-Y_{t-m}\right|}}}[13]
The main difference from the non-seasonal method is that the denominator is the mean absolute error of the one-step "seasonal naive forecast method" on the training set,[14] which uses the actual value from the prior season as the forecast: Ft = Yt−m,[15] where m is the seasonal period.
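The non-seasonal sketch generalizes by replacing the lag-1 difference with a lag-m difference; a hedged illustration (not a library API), with m = 1 recovering the non-seasonal form:

```python
def mase(training, actual, forecast, m=1):
    """MASE with seasonal period m. The denominator is the in-sample
    MAE of the seasonal naive forecast F_t = Y_{t-m}."""
    T = len(training)
    # In-sample MAE of the seasonal naive forecast over t = m+1..T.
    scale = sum(abs(training[t] - training[t - m]) for t in range(m, T)) / (T - m)
    mae = sum(abs(y - f) for y, f in zip(actual, forecast)) / len(actual)
    return mae / scale

# Seasonal period m=2: each value sits 1 above the value two steps back,
# so the seasonal naive MAE is 1 and the MASE equals the forecast MAE.
print(mase([1, 3, 2, 4, 3, 5], actual=[4, 6], forecast=[3, 7], m=2))  # 1.0
```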
This scale-free error metric "can be used to compare forecast methods on a single series and also to compare forecast accuracy between series".[16] It is well suited to intermittent-demand series (data sets containing a large number of zeros) because it never gives infinite or undefined values, except in the irrelevant case where all historical data are equal.[17]
When comparing forecasting methods, the method with the lowest MASE is preferred.
For non-time-series data, the mean of the data ({\displaystyle {\bar {Y}}}) can be used as the "base" forecast.[18]
In this case the MASE is the mean absolute error of the forecasts divided by the mean absolute deviation of the data.
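A sketch of this variant, assuming the mean absolute deviation is taken over the same data being forecast (the function name is illustrative):

```python
def mase_mean_base(actual, forecast):
    """MASE with the data mean as the base forecast: MAE of the
    forecasts divided by the mean absolute deviation of the data."""
    y_bar = sum(actual) / len(actual)
    # Mean absolute deviation about the mean: the MAE of always forecasting y_bar.
    mad = sum(abs(y - y_bar) for y in actual) / len(actual)
    mae = sum(abs(y - f) for y, f in zip(actual, forecast)) / len(actual)
    return mae / mad

print(mase_mean_base([1, 3], forecast=[1, 1]))  # 1.0
```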
Hyndman, R. J. (2006). "Another look at measures of forecast accuracy". Foresight, Issue 4 (June 2006), p. 46. http://robjhyndman.com/papers/foresight.pdf

Franses, Philip Hans (2016). "A note on the Mean Absolute Scaled Error". International Journal of Forecasting. 32 (1): 20–22. doi:10.1016/j.ijforecast.2015.03.008. hdl:1765/78815. http://repub.eur.nl/pub/78815

Hyndman, R. J.; Koehler, A. B. (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001.

Makridakis, Spyros (1993). "Accuracy measures: theoretical and practical concerns". International Journal of Forecasting. 9 (4): 527–529. doi:10.1016/0169-2070(93)90079-3. S2CID 153403127.

Diebold, Francis X.; Mariano, Roberto S. (1995). "Comparing predictive accuracy". Journal of Business and Economic Statistics. 13 (3): 253–263. doi:10.1080/07350015.1995.10524599.

Diebold, Francis X.; Mariano, Roberto S. (2002). "Comparing predictive accuracy" (PDF). Journal of Business and Economic Statistics. 20 (1): 134–144. doi:10.1198/073500102753410444. S2CID 12090811. http://www.nber.org/papers/t0169.pdf

Diebold, Francis X. (2015). "Comparing predictive accuracy, twenty years later: A personal perspective on the use and abuse of Diebold–Mariano tests" (PDF). Journal of Business and Economic Statistics. 33 (1): 1. doi:10.1080/07350015.2014.983236. http://www.nber.org/papers/w18391.pdf

"2.5 Evaluating forecast accuracy". OTexts. Retrieved 2016-05-15. https://www.otexts.org/fpp/2/5

Hyndman, Rob; et al. (2008). Forecasting with Exponential Smoothing: The State Space Approach. Berlin: Springer-Verlag. ISBN 978-3-540-71916-8.

Hyndman, Rob. "Alternative to MAPE when the data is not a time series". Cross Validated. Retrieved 2022-10-11. https://stats.stackexchange.com/q/108963