Python Cheat Sheet: Measuring Prediction Errors in Time Series Forecasting

Measuring prediction errors is an important step in developing a predictive machine learning model. In time series forecasting, model performance is typically measured with several error metrics, each of which has its own advantages and disadvantages. The metrics are therefore typically used in combination. This blog post presents a cheat sheet containing six of the most common error metrics used in time series forecasting. For each metric, the cheat sheet contains the mathematical formula, a short code sequence to implement it in Python, and some hints for its interpretation.

Metrics for measuring prediction errors

The performance of time series forecasting models is measured by the deviations between the predictions (y_pred) and the actual values (y_test). The prediction error is defined as the actual value minus the prediction: if the prediction lies below the actual value, the error is positive; if the prediction lies above the actual value, the error is negative.

Figure: Predictions vs. actual values in time series forecasting

The following six metrics are commonly used to measure the prediction errors of time series forecasting models:

  • Mean Absolute Error (MAE)
  • Mean Absolute Percentage Error (MAPE)
  • Median Absolute Error (MedAE)
  • Mean Squared Error (MSE)
  • Root Mean Squared Error (RMSE)
  • Median Absolute Percentage Error (MdAPE)

You may wonder why there are multiple error metrics. The reason is that each metric by itself covers only part of the overall picture. For instance, imagine you have developed a model to predict the consumption of a power plant. The predictions of the model are generally accurate, but in a few cases they are very wrong. In other words, there are outliers among the prediction errors. To detect this situation, it is not sufficient to calculate the average prediction error; you will need to combine different error metrics. The following sections introduce the six error metrics, each with a short Python example.
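
The Python snippets in the following sections all operate on two NumPy arrays, y_test (actual values) and y_pred (predictions). The arrays below are a minimal, made-up example containing one deliberate outlier; in practice, you would of course use your own test set and model output.

    import numpy as np

    # Hypothetical example data (for illustration only)
    y_test = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])  # actual values
    y_pred = np.array([11.0, 11.5, 14.5, 15.0, 25.0, 19.5])  # predictions

    # Prediction errors: positive where the prediction lies below the actual value
    errors = y_test - y_pred  # the -7.0 entry is the deliberate outlier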

Mean Absolute Error (MAE)

Mean Absolute Error (MAE) is a metric that measures the arithmetic average of the absolute deviations between predictions and actual values. An MAE of 5 tells us that, on average, our predictions deviate from the actual values by 5. Whether this error is considered small or large depends on the application and the scale of the predictions. For instance, an error of 5 nanometers is negligible when measuring a building, but large when measuring a biological membrane. So when working with the MAE, mind the scale.

  • Scale-dependent
  • Because it is calculated on absolute values, positive and negative deviations from the actual value are weighted equally.
  • The MAE is sensitive to outliers, as large errors can have a strong impact. For this reason, the MAE should be used in combination with additional metrics.
  • The MAE has the same unit as the predictions.
Formula of the MAE:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
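
A minimal sketch in Python, using scikit-learn's mean_absolute_error and the example arrays y_test and y_pred from above:

    from sklearn.metrics import mean_absolute_error
    import numpy as np

    mae = mean_absolute_error(y_test, y_pred)
    print(mae)  # 1.75 for the example data

    # Equivalent NumPy formulation
    mae_np = np.mean(np.abs(y_test - y_pred))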

Mean Absolute Percentage Error (MAPE)

The mean absolute percentage error calculates the mean percentage deviation between predictions and actual values.

  • The MAPE is scale-independent, which makes it easier to interpret.
  • Cannot be used when any actual value is zero, since this leads to a division by zero.
  • Puts a heavier penalty on negative errors (predictions above the actual value) than on positive ones.
Formula of the MAPE:

$$\mathrm{MAPE} = \frac{100\,\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$$
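
A sketch using scikit-learn's mean_absolute_percentage_error, which is available from scikit-learn 0.24 onwards and returns a fraction rather than a percentage (again using the example arrays from above):

    from sklearn.metrics import mean_absolute_percentage_error
    import numpy as np

    mape = mean_absolute_percentage_error(y_test, y_pred)
    print(mape * 100)  # ≈ 10.9 percent for the example data

    # NumPy equivalent; breaks down if any actual value is zero
    mape_np = np.mean(np.abs((y_test - y_pred) / y_test)) * 100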

Median Absolute Error (MedAE)

The Median Absolute Error (MedAE) calculates the median of the absolute deviations between predictions and actual values.

  • The MedAE has the same unit as the predictions.
  • A MedAE of 10 means that half of the absolute errors are greater than 10 and half are smaller.
  • The MedAE is resistant to outliers. It is therefore often used in combination with the MAE. A strong deviation between MAE and MedAE indicates outliers among the errors, i.e. cases where the prediction deviates much more from the actual value than on average.

Formula of the MedAE:

$$\mathrm{MedAE} = \operatorname{median}\left( \left| y_1 - \hat{y}_1 \right|, \ldots, \left| y_n - \hat{y}_n \right| \right)$$
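
A sketch using scikit-learn's median_absolute_error on the example arrays from above. Note how much smaller the result is than the MAE of 1.75; this MAE/MedAE gap is exactly the kind that points to an outlier:

    from sklearn.metrics import median_absolute_error

    medae = median_absolute_error(y_test, y_pred)
    print(medae)  # 0.75 for the example data, vs. an MAE of 1.75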

Mean Squared Error (MSE)

We can calculate the MSE by averaging the squared differences between the predicted and the actual values.

  • Since all deviations are squared, the MSE is very sensitive to outliers.
  • The MSE is expressed in the squared unit of the predictions. If its square root (the RMSE) is much larger than the MAE, this indicates strong outliers among the prediction errors.

Formula of the MSE:

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$$
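
A sketch using scikit-learn's mean_squared_error on the example arrays from above; the single outlier dominates the result:

    from sklearn.metrics import mean_squared_error

    mse = mean_squared_error(y_test, y_pred)
    print(mse)  # 8.625 for the example data; the squared outlier alone contributes 49.0 / 6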

Root Mean Squared Error (RMSE)

The root mean squared error is another standard way to measure the performance of a forecasting model.

  • Has the same unit as the predictions
  • A good measure of how accurately the model predicts the response
  • Sensitive to outliers, since the errors are squared before averaging

Formula of the RMSE:

$$\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 }$$
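
A sketch computing the RMSE as the square root of the MSE; newer scikit-learn versions (1.4 and later) also provide a dedicated root_mean_squared_error function:

    import numpy as np
    from sklearn.metrics import mean_squared_error

    rmse = np.sqrt(mean_squared_error(y_test, y_pred))
    print(rmse)  # ≈ 2.94 for the example data, noticeably larger than the MAE of 1.75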

Median Absolute Percentage Error (MdAPE)

The Median Absolute Percentage Error (MdAPE) calculates the median percentage deviation between predictions and actual values.

  • Scale-independent, like the MAPE
  • Cannot be used when any actual value is zero, since this leads to a division by zero
  • More robust to distortion from outliers than the MAPE
Formula of the MdAPE:

$$\mathrm{MdAPE} = \operatorname{median}\left( \left| \frac{y_1 - \hat{y}_1}{y_1} \right|, \ldots, \left| \frac{y_n - \hat{y}_n}{y_n} \right| \right) \cdot 100\,\%$$
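
To my knowledge, scikit-learn offers no dedicated MdAPE function, but the metric is a NumPy one-liner (shown here on the example arrays from above):

    import numpy as np

    # Median absolute percentage error; breaks down if any actual value is zero
    mdape = np.median(np.abs((y_test - y_pred) / y_test)) * 100
    print(mdape)  # ≈ 5.21 percent for the example data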

Summary

This post has presented a cheat sheet with six common metrics for measuring prediction errors in time series forecasting, covering for each metric the formula, a Python implementation, and the particularities of its application.

Leave a comment if you have any remaining questions or remarks.

I have created another blog post, in which I demonstrate how to use the metrics.

Author

Hi, my name is Florian! I am a Zurich-based Data Scientist with a passion for Artificial Intelligence and Machine Learning. After completing my PhD in Business Informatics at the University of Bremen, I started working as a Machine Learning Consultant for the Swiss consulting firm ipt. When I'm not working on use cases for our clients, I work on my own analytics projects and report on them in this blog.
