# sklearn.metrics.mean_absolute_error in Python

This article covers calculating Mean Absolute Error (MAE) with scikit-learn's sklearn.metrics.mean_absolute_error function in Python.

First, let's define MAE and where we use it. MAE measures the difference between two paired sets of observations; it tells us how much one set deviates, on average, from the other. In this article, we use MAE to measure the error between our predicted and observed label values, using sklearn.metrics.mean_absolute_error in Python.

Mathematically, we formulate MAE as:

MAE = sum(|yi − xi|) / n, where n is the number of paired observations

In other words, MAE is the arithmetic average of the absolute errors between two sets of observations.
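As a quick sanity check, the formula above can be computed by hand in plain Python, with no library needed (the sample values here are just for illustration):

```python
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# Average of the absolute differences between paired observations
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # 0.5
```

The absolute differences are 0.5, 0.5, 0, and 1, which sum to 2; divided by n = 4, that gives an MAE of 0.5.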

Suppose that in a Linear Regression task you obtain predictions “y_pred” by fitting a Linear Regression model to your dataset. You then need a way to measure the model's performance. Let's use MAE to quantify the error between the two observation sets.

For that, we need the scikit-learn library installed on our system. Use the following command in your terminal or command prompt to install scikit-learn.

```
pip install scikit-learn
```

Then, in your Python file, run this import to check that it is installed properly.

```python
from sklearn.metrics import mean_absolute_error
```

For the sake of example, let's consider two lists as our true and predicted labels, y_true and y_pred, respectively. In practice, y_true would come from splitting the dataset into training and test sets, and y_pred would come from our Linear Regression model.

```python
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
```

We use the imported function mean_absolute_error to find MAE.

```python
MAE = mean_absolute_error(y_true, y_pred)
print(MAE)
```

Output:

0.5
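To see how this fits into the Linear Regression workflow described earlier, here is a minimal end-to-end sketch. The synthetic dataset, the random seed, and the 25% test split are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic data: y is a noisy linear function of X (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + rng.normal(0, 1, size=100)

# Split into training and test sets; y_test plays the role of y_true
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit the model and predict on the held-out test set
model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# MAE between observed and predicted labels
print(mean_absolute_error(y_test, y_pred))
```

Because the noise added to y has a standard deviation of 1, the printed MAE should come out somewhere below 1, reflecting how far the model's predictions are, on average, from the true test labels.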
