Activation Functions for Neural Networks

What is a Neural Network?

As we all know, neurons are among the most important parts of the human body; they give us the ability to see, differentiate and make decisions. In a similar way, neural networks are used to give a machine/system some of the abilities that human beings possess. By implementing neural networks in software we can build artificial intelligence. Let us discuss the structure of a neural network in brief:

  • Input Layer: The number of neurons in the input layer equals the number of features in the dataset that the neural network will be trained on.
  • Hidden Layer: There is no strict rule for the number of hidden layers or the neurons in them; we tune them to improve the performance of the model. More hidden layers give the model more capacity, but they do not automatically guarantee better performance.
  • Output Layer: The number of neurons in the output layer is equal to the number of outputs in the dataset (for example, the number of classes in a classification problem).
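
To make the layer sizes concrete, here is a minimal sketch (the dataset shape, the hidden-layer width of 8 and the three output classes are made-up numbers for illustration):

    import numpy as np

    # made-up dataset: 150 samples, 4 features, 3 output classes
    X = np.random.rand(150, 4)

    n_inputs = X.shape[1]   # input layer: one neuron per feature -> 4
    n_hidden = 8            # hidden layer: a free design choice
    n_outputs = 3           # output layer: one neuron per output/class

    # weight matrices connecting consecutive layers must have matching shapes
    W1 = np.random.randn(n_inputs, n_hidden)    # (4, 8)
    W2 = np.random.randn(n_hidden, n_outputs)   # (8, 3)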

What is an Activation Function and Why Should We Use It?

An activation function decides the output of the neurons in a particular layer. It helps determine whether, and how strongly, a neuron contributes to the next layer, and its derivative is used during the back-propagation step, where we compute the gradient of the loss function to update the weights.

We use it to increase the effectiveness of a model and to keep the weight calculations performed while minimising the loss function simple and stable.
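
As a small sketch of that idea (the inputs, weights and bias below are arbitrary numbers), a neuron first computes a weighted sum of its inputs and then passes it through an activation function; the activation decides what, if anything, the neuron passes on to the next layer:

    def relu(x):
        # a simple activation function (covered in detail below)
        return max(0, x)

    inputs = [0.5, -1.2, 3.0]
    weights = [0.4, 0.1, -0.6]
    bias = 0.2

    weighted_sum = sum(w * i for w, i in zip(weights, inputs)) + bias
    output = relu(weighted_sum)
    print(output)   # 0 here, so this neuron contributes nothing to the next layer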

Different Types of Activation Functions:

  1. Sigmoid Function in Python:
    The equation of the function is: f(x) = 1 / (1 + e^-x).
    It is a non-linear function whose value lies between 0 and 1.
    Its derivative lies between 0 and 0.25, so a small change in x around 0 can make a large change in the output. We mainly use it for binary classification problems.
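    A minimal Python sketch (the helper names are my own) that also shows the derivative reaching its maximum of 0.25 at x = 0:

    import math

    def sigmoid(x):
        # squashes any real number into the range (0, 1)
        return 1 / (1 + math.exp(-x))

    def sigmoid_derivative(x):
        # the derivative is s * (1 - s), which peaks at 0.25 when x = 0
        s = sigmoid(x)
        return s * (1 - s)

    print(sigmoid(0))             # 0.5
    print(sigmoid_derivative(0))  # 0.25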
  2. Tanh Function in Python:
       The equation of the function is : f(x)=((2*e^x)/(e^x+e^-x)) -1 .
    We also call it the hyperbolic tangent function. Its value lies between -1 and 1. Its derivative, 1 - tanh^2(x), lies between 0 and 1.
    The relation between tanh (f(x)) and sigmoid (g(x)) is: f(x) = 2*g(2*x) - 1.
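    A short sketch (the helper names are mine) that checks this relation numerically:

    import math

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def tanh(x):
        # equivalent to (2 * e^x) / (e^x + e^-x) - 1
        return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

    x = 0.5
    print(tanh(x))                 # ~0.4621
    print(2 * sigmoid(2 * x) - 1)  # same value, confirming f(x) = 2*g(2*x) - 1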
  3. Relu Function in Python:
    Rectified Linear Unit is the most widely used activation function for hidden-layer neurons. The equation of the function is: f(x) = max(0, x). The range of this function is [0, inf).
    We use it to help avoid the vanishing gradient (and exploding gradient) problems, in which training stalls and cannot reach a good minimum of the loss.

    def relu(x):
        # returns x for positive inputs and 0 otherwise, i.e. max(0, x)
        if x > 0:
            return x
        else:
            return 0

    It is also a non-linear activation function.

  4. Leaky Relu Function in Python:
    It is the same as the relu function, except that for negative inputs we return a small scaled value instead of 0, so that the derivative does not become zero on that side. This way every neuron keeps receiving a gradient update during back-propagation in each epoch.
    The code looks like this:

    def leaky_relu(x, alpha=0.01):
        # alpha is the small slope applied to negative inputs (0.01 is a common choice)
        if x > 0:
            return x
        else:
            return alpha * x
  5. Softmax Function in Python:
    We generally use it when we have a multi-class classification problem.
    It is also a non-linear activation function.
    Using this function we can squash each output into the range 0 to 1, with all outputs summing to 1. We mainly use it in the output layer, where it gives a probability for each class so that we can classify inputs easily.
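    A minimal NumPy sketch (the input scores are made up; subtracting the maximum is a common trick for numerical stability):

    import numpy as np

    def softmax(z):
        # exponentiate and normalise so the outputs are in (0, 1) and sum to 1
        e = np.exp(z - np.max(z))
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])
    print(softmax(scores))        # ~[0.659 0.242 0.099]
    print(softmax(scores).sum())  # 1.0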

 

General Tips:

If you are not familiar with activation functions or are not sure which one to use, I simply suggest going with the Relu function in the hidden layers and the softmax function in the output layer.

But remember one thing: for a binary classification problem, sigmoid in the output layer would give us a better result.
