Common Metrics to Measure Regression Errors

This article presents six error metrics that are commonly used to measure regression errors in machine learning. Measuring errors is an important step in developing a predictive model and the basis for evaluating a model’s performance. However, a universal error metric does not exist. Instead, there are several error metrics, each with its own advantages and disadvantages. Therefore, we typically use them in combination. This article seeks to provide an overview of these metrics and explains how to use them.

About Regression Errors

In general, we measure the performance of regression models by calculating the deviations between the actual values (y_test) and the predictions (y_pred), i.e., error = y_test - y_pred. If the prediction lies below the actual value, the prediction error is positive; if the prediction lies above the actual value, the prediction error is negative.

Predictions vs. actual values in time series forecasting
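
As a minimal sketch of this convention, with made-up numbers and assuming y_test and y_pred are NumPy arrays of equal length:

    import numpy as np

    y_test = np.array([100.0, 110.0, 120.0])  # actual values
    y_pred = np.array([98.0, 113.0, 120.5])   # model predictions

    # Positive where the prediction lies below the actual value,
    # negative where it lies above.
    errors = y_test - y_pred
    print(errors)  # [ 2.  -3.  -0.5]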

When testing a model, the goal is typically to get a realistic impression of how far predictions deviate from reality (actual values). However, in a sample of predictions, the errors can vary greatly depending on the data point. Therefore, it is not enough to look at individual error values. This is where error metrics come into play. They inform us about the statistical distribution of errors in a prediction sample.

Six Common Error Metrics for Measuring Regression Errors

The following six metrics are commonly used to measure prediction errors. We can use them to measure the prediction errors of various regression problems, including time series forecasting.

  • Mean Absolute Error (MAE)
  • Mean Absolute Percentage Error (MAPE)
  • Median Absolute Error (MedAE)
  • Mean Squared Error (MSE)
  • Root Mean Squared Error (RMSE)
  • Median Absolute Percentage Error (MdAPE)

You may wonder why there are multiple error metrics. The reason is that each metric by itself covers only part of the overall picture. For instance, imagine you have developed a model to predict the consumption of a power plant. The predictions of the model are generally accurate, but in a few cases they are very wrong. In other words, outliers among the prediction errors make it difficult to judge the model's performance. In this situation, it is not enough to calculate the average prediction error. A more robust approach combines different error metrics, which together tell us how likely it is that prediction errors fall within a specific range. The short sketch below illustrates the effect of outliers, before the following sections introduce the six error metrics.
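
The following sketch, with made-up error values, shows how a single outlier inflates the mean absolute error while the median absolute error barely moves:

    import numpy as np

    # Absolute prediction errors of a sample; the last value is an outlier.
    abs_errors = np.array([2.0, 3.0, 1.5, 2.5, 40.0])

    print(np.mean(abs_errors))    # 9.8  -> dominated by the outlier
    print(np.median(abs_errors))  # 2.5  -> barely affected by the outlier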

Mean Absolute Error (MAE)

Mean Absolute Error (MAE) is a metric that measures the arithmetic average of the absolute deviations between predictions and actual values. An MAE of 5 tells us that, on average, our predictions deviate from the actual values by 5. Whether this error is considered small or large depends on the application and on the scale of the predictions: an error of 5 nanometers is negligible when measuring a building, but substantial when measuring a biological membrane that is itself only a few nanometers thick. So when working with the MAE, mind the scale.

  • The MAE is scale-dependent.
  • The MAE uses absolute values, so positive and negative deviations from the actual values count equally.
  • The MAE is sensitive to outliers, as large individual errors have a strong impact on the average. For this reason, the MAE should be used in combination with additional metrics.
  • The MAE has the same unit as the predictions.
Formula of the MAE
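
A minimal way to compute the MAE in Python, assuming y_test and y_pred are array-like (e.g., the arrays from the example above) and scikit-learn is installed:

    import numpy as np
    from sklearn.metrics import mean_absolute_error

    mae = mean_absolute_error(y_test, y_pred)

    # Equivalent manual computation: mean of the absolute deviations.
    mae_manual = np.mean(np.abs(y_test - y_pred))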

Mean Absolute Percentage Error (MAPE)

The Mean Absolute Percentage Error (MAPE) calculates the mean of the absolute percentage deviations between predictions and actual values.

  • The mean absolute percentage error is scale-independent, making it easier to interpret.
  • It must not be used when any actual value is zero, since this leads to division by zero.
  • The MAPE puts a heavier penalty on negative errors (predictions that lie above the actual values) than on positive errors.
Formula of the MAPE
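
A sketch for the MAPE, assuming y_test and y_pred as before; mean_absolute_percentage_error requires scikit-learn 0.24 or newer and returns a fraction rather than a percentage:

    import numpy as np
    from sklearn.metrics import mean_absolute_percentage_error

    mape = mean_absolute_percentage_error(y_test, y_pred)  # e.g. 0.05 means 5 %

    # Manual version: mean of |error / actual|; fails if any actual value is zero.
    mape_manual = np.mean(np.abs((y_test - y_pred) / y_test))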

Median Absolute Error (MedAE)

The Median Absolute Error (MedAE) calculates the median of the absolute deviations between predictions and actual values.

  • The MedAE has the same unit as the predictions.
  • A MedAE of 10 means that half of the absolute errors are greater than 10 and half are smaller.
  • The MedAE is resistant to outliers. Therefore, we often use it in combination with the MAE: a strong deviation between MAE and MedAE indicates that there are outliers among the errors, i.e., some predictions deviate far more from the actual values than is typical.

Formula of the MedAE
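
A sketch that computes the MedAE alongside the MAE, assuming y_test and y_pred as before:

    from sklearn.metrics import mean_absolute_error, median_absolute_error

    mae = mean_absolute_error(y_test, y_pred)
    medae = median_absolute_error(y_test, y_pred)

    # A MedAE that is much smaller than the MAE hints at outliers among the errors.
    print(f"MAE: {mae:.2f}, MedAE: {medae:.2f}")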

Mean Squared Error (MSE)

We calculate the MSE as the average of the squared differences between the predicted and the actual values.

  • Since the errors are squared, the MSE is very sensitive to outliers.
  • Because the MSE is expressed in squared units, compare its square root (the RMSE) with the MAE: an RMSE that is much larger than the MAE indicates strong outliers among the prediction errors.

Formula of the MSE
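
A sketch for the MSE, assuming y_test and y_pred as before:

    import numpy as np
    from sklearn.metrics import mean_squared_error

    mse = mean_squared_error(y_test, y_pred)

    # Manual version: mean of the squared deviations (note the squared unit).
    mse_manual = np.mean((y_test - y_pred) ** 2)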

Root Mean Squared Error (RMSE)

The root-mean-squared error is another standard way to measure the performance of a forecasting model.

  • Has the same unit as the predictions
  • A good measure of how accurately the model predicts the response
  • Like the MSE, it is sensitive to outliers, since the errors are squared before averaging

Formula of the RMSE
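
A sketch for the RMSE, assuming y_test and y_pred as before; taking the square root of the MSE with NumPy works across scikit-learn versions (recent versions also provide a dedicated root_mean_squared_error function):

    import numpy as np
    from sklearn.metrics import mean_squared_error

    # RMSE = square root of the MSE.
    rmse = np.sqrt(mean_squared_error(y_test, y_pred))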

Median Absolute Percentage Error (MdAPE)

The Median Absolute Percentage Error (MdAPE) calculates the median of the absolute percentage deviations between predictions and actual values.

  • Scale-independent, like the MAPE
  • Must not be used when any actual value is zero (division by zero)
  • More robust to distortion from outliers than the MAPE
Formula of the MdAPE
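
scikit-learn has no built-in MdAPE, so a small manual computation with NumPy can be used instead, assuming y_test and y_pred as before and no actual value equal to zero:

    import numpy as np

    # Median of the absolute percentage deviations, expressed in percent.
    mdape = np.median(np.abs((y_test - y_pred) / y_test)) * 100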

Regression Error Cheat Sheet

Under the link below, you can download a cheat sheet that provides an overview of the six common regression error metrics. It contains the mathematical formula for each metric, a short Python code snippet to implement it, and hints for its interpretation.

Python regression cheat sheet

Summary

In this article, we have briefly discussed six common regression error metrics and the particularities of their application. We can apply these metrics to all types of regression problems, including time series forecasting.

If you like the content or have remaining questions, please let me know in the comments.

Author

Hi, I am Florian, a Zurich-based consultant for AI and Data. Since the completion of my Ph.D. in 2017, I have been working on the design and implementation of ML use cases in the Swiss financial sector. I started this blog in 2020 with the goal of sharing my experiences and creating a place where you can find key concepts of machine learning and materials that will allow you to kick-start your own Python projects.
