Loss Functions in Machine Learning

Loss functions are a crucial part of the machine learning pipeline, but knowing which one to use in an artificial neural network can be confusing. This tutorial explains how loss functions work and how to use them. They are easy to understand, and seeing how useful they are in machine learning and deep learning is pretty interesting.
What are Loss Functions?
The loss function is a mathematical function that measures the error of a neural network; during training, the model minimizes this error by updating the values of its weights. At the end of each epoch of the training process, the loss is calculated on the model's predictions.
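To make this concrete, here is a minimal PyTorch sketch of a training loop driven by a loss function. The toy linear model, random data, and hyperparameters are made up purely for illustration:

import torch
import torch.nn as nn

# Toy setup (not from the article): a small linear regressor and MSE loss.
model = nn.Linear(in_features=3, out_features=1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Made-up data: 8 samples with 3 features each, and their targets.
inputs = torch.randn(8, 3)
targets = torch.randn(8, 1)

for epoch in range(5):
    predictions = model(inputs)           # forward pass
    loss = loss_fn(predictions, targets)  # measure the error
    optimizer.zero_grad()                 # clear old gradients
    loss.backward()                       # compute gradients of the loss
    optimizer.step()                      # update the weights to reduce the loss
    print(f"epoch {epoch}: loss = {loss.item():.4f}")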
What do Loss Functions actually do during training?
Essentially, the model calculates the error for each input by looking at the output it predicts for that input and taking the difference between that output value and the correct label for that input.
Simple Example
Say a model classifies images as cat or dog, with the label 0 for a cat and 1 for a dog. If we pass an image of a cat through the model and it outputs 0.25, then the error for this output is the difference between the prediction and the true label: 0.25 − 0 (the label of cat), which is 0.25. The model does this for every input, and at the end of each epoch it collects all of the individual errors and passes them to the loss function.
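This toy calculation can be written out directly; the numbers below simply mirror the cat/dog example above and are not from any real model:

import torch

# Label 0 = cat, label 1 = dog.
prediction = torch.tensor(0.25)   # model output for a cat image
true_label = torch.tensor(0.0)    # correct label for a cat
error = prediction - true_label
print(error.item())  # 0.25

# Over an epoch, the individual errors are combined into one loss value,
# for example the mean of the squared errors:
predictions = torch.tensor([0.25, 0.80, 0.10, 0.60])
labels = torch.tensor([0.0, 1.0, 0.0, 1.0])
print(torch.mean((predictions - labels) ** 2).item())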
Variety of Loss Functions
- Regression Losses
  - Mean Square Error Loss or L2 Loss
  - Mean Absolute Error or L1 Loss
  - Mean Bias Error
- Classification Losses
  - Hinge Loss or Multi-Class SVM Loss
  - Cross-Entropy
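As a rough illustration of a few of these, here is a short PyTorch sketch using the built-in loss modules where they exist (Mean Bias Error has no built-in, so it is computed directly); the tensors are made-up examples:

import torch
import torch.nn as nn

# Regression losses on made-up predictions and targets.
preds = torch.tensor([2.5, 0.0, 2.0, 8.0])
targets = torch.tensor([3.0, -0.5, 2.0, 7.0])
mse = nn.MSELoss()(preds, targets)   # L2 loss: mean of squared errors
mae = nn.L1Loss()(preds, targets)    # L1 loss: mean of absolute errors
mbe = torch.mean(preds - targets)    # Mean Bias Error, computed directly

# Classification losses on made-up logits for 3 classes.
logits = torch.tensor([[1.2, 0.3, -0.8],
                       [0.1, 2.0, 0.5]])
labels = torch.tensor([0, 1])
hinge = nn.MultiMarginLoss()(logits, labels)   # multi-class hinge / SVM loss
ce = nn.CrossEntropyLoss()(logits, labels)     # cross-entropy loss

print(mse.item(), mae.item(), mbe.item(), hinge.item(), ce.item())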
Implementation of Loss Functions
Here is a simple example of a PyTorch implementation of the cross-entropy loss function in a neural network's forward function; the surrounding class is just a minimal wrapper so the method's self attributes (weight, ignore_index, reduction) have somewhere to live.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossEntropyLossModule(nn.Module):
    # Minimal assumed wrapper: only the forward method was given originally,
    # and it reads weight, ignore_index, and reduction from self.
    def __init__(self, weight=None, ignore_index=-100, reduction='mean'):
        super().__init__()
        self.weight = weight
        self.ignore_index = ignore_index
        self.reduction = reduction

    def forward(self, input, target):
        return F.cross_entropy(input, target, weight=self.weight,
                               ignore_index=self.ignore_index,
                               reduction=self.reduction)
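Continuing from the snippet above, a quick usage sketch with made-up logits and integer class labels:

# Made-up batch of logits for 3 classes, and the correct class index per sample.
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5, 0.3]])
labels = torch.tensor([0, 1])

criterion = CrossEntropyLossModule()
loss = criterion(logits, labels)
print(loss.item())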