# Scikit-learn accuracy score

**Introduction:** In machine learning, accuracy plays an important role: it mirrors the effectiveness of a model by telling us the percentage of correct predictions. Scikit-learn provides several functions to measure it, such as `accuracy_score`, `classification_report`, and `confusion_matrix`. In this blog, we will understand accuracy, its mathematical background, and how to compute it with hands-on code.

**Accuracy score:** The accuracy score measures how accurate our model is. The two most popular ways to inspect it are the classification report and the confusion matrix. For binary classification, the confusion matrix is a 2x2 matrix that records correct and wrong predictions in terms of positives and negatives. From it we can say that accuracy is the sum of all the truly positive and truly negative predictions divided by the sum of all the entries in the matrix. But first let us understand the matrix and how it works. It has four cells, as shown below:

Matrix = [ truly_positive&nbsp;&nbsp;&nbsp;falsely_negative

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;falsely_positive&nbsp;&nbsp;truly_negative ]

**accuracy** = (truly_positive + truly_negative) / (truly_positive + truly_negative + falsely_positive + falsely_negative)
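The formula above can be checked with a quick worked example. The four counts below are made up purely for illustration:

```python
# Hypothetical counts for the four cells of a 2x2 confusion matrix
truly_positive = 40
truly_negative = 45
falsely_positive = 10
falsely_negative = 5

# Accuracy = correct predictions / all predictions
accuracy = (truly_positive + truly_negative) / (
    truly_positive + truly_negative + falsely_positive + falsely_negative
)
print(accuracy)  # 0.85
```

So out of 100 predictions, 85 were correct, giving an accuracy of 0.85.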

Here,

**truly_positive** = case was positive and the model predicted it positive

**truly_negative** = case was negative and the model predicted it negative

**falsely_negative** = case was positive but the model predicted it negative

**falsely_positive** = case was negative but the model predicted it positive

Now let us move to the coding part.

### Accuracy score in Python with scikit-learn

```python
from sklearn.metrics import classification_report, confusion_matrix

print("For classification report:")
print(classification_report(y_test, predictions))

print("For confusion matrix:")
print(confusion_matrix(y_test, predictions))
```
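The snippet above assumes `y_test` and `predictions` already exist from an earlier train/test split. For a self-contained run, here is a minimal sketch using small made-up label vectors; it also demonstrates `accuracy_score`, the function this article is named after:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Small, made-up true labels and model predictions for illustration
y_test      = [0, 0, 1, 1, 0, 1, 0, 1]
predictions = [0, 1, 1, 1, 0, 0, 0, 1]

# Fraction of predictions that match the true labels: 6 of 8 correct
print(accuracy_score(y_test, predictions))   # 0.75

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, predictions))
```

Note that `accuracy_score` gives the single accuracy number directly, while the confusion matrix shows where the errors come from.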

The output will be:

```
For classification report:

              precision    recall  f1-score   support

           0       0.74      0.87      0.80       167
           1       0.70      0.48      0.57       100

   micro avg       0.73      0.73      0.73       267
   macro avg       0.72      0.68      0.68       267
weighted avg       0.72      0.73      0.71       267

For confusion matrix:

array([[146,  21],
       [ 52,  48]])
```
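As a sanity check, the micro-average accuracy of 0.73 in the report can be reproduced by hand: class 0 contributes 146 correct predictions (the first entry of the matrix), class 1 contributes 0.48 × 100 = 48 correct predictions (its recall times its support), out of 267 samples in total:

```python
# Recompute overall accuracy from the figures reported above
correct = 146 + 48   # correctly classified class-0 + class-1 samples
total = 167 + 100    # support of the two classes
print(round(correct / total, 2))  # 0.73
```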

**Conclusion:**

The accuracy score plays an important role, but it depends entirely on our model: how the model works, how we cleaned the data, and how we applied the algorithm. These things matter more than the accuracy score itself, so we should always focus on them first.

Also read: How to tune Hyperparameters with Python and scikit-learn
