# Neural Networks Explained: What to Know About Machine Learning

A neural network is a computational model inspired by the networks of neurons in the brain. It processes input data through layers of interconnected nodes to produce an output. During training, the **neural network** makes a prediction, measures how far off it is using a loss function, and adjusts its internal weight matrices with an optimization algorithm to minimize that loss. The step that computes how much each weight contributed to the error is known as backpropagation.

## Input Layer

The input layer transforms raw data from various sources into a format the rest of the neural network can understand. This step provides the foundation for the rest of the model to perform its functions, such as feature extraction or pattern recognition.

Each input is assigned a weight on its connection to the next layer. When a node receives its inputs, it multiplies each data item by its weight and sums the results. If the resulting sum is above a threshold value, that node "fires" and passes its signal on to the next layer.
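The weighted-sum-and-threshold step above can be sketched in a few lines of Python; the inputs, weights, and threshold below are illustrative values, not taken from a real model:

```python
def node_fires(inputs, weights, threshold):
    # Multiply each data item by its weight and sum the results.
    total = sum(x * w for x, w in zip(inputs, weights))
    # The node "fires" only if the sum exceeds the threshold.
    return total > threshold

# 1.0*0.4 + 0.5*0.3 + 0.2*0.1 = 0.57, which exceeds 0.5, so the node fires.
print(node_fires([1.0, 0.5, 0.2], [0.4, 0.3, 0.1], threshold=0.5))  # True
```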

The weighted sums are then passed through an activation function, a mathematical function that turns a neuron's inputs into its output. Two of the most common activation functions are the sigmoid and the hyperbolic tangent (tanh). The sigmoid squashes the sum of the inputs into a value between 0 and 1, which can be read as a probability, and this is what is sent onward toward the output layer. That output can then be used to predict something, such as the probability of hospital admission.
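The two activations named above can be written as plain functions. Sigmoid squashes any real input into (0, 1), so its output can be read as a probability; tanh squashes into (-1, 1) and is zero-centered:

```python
import math

def sigmoid(z):
    # Squashes z into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(2.0))    # ≈ 0.88, a probability-like value
print(math.tanh(2.0))  # ≈ 0.96, bounded in (-1, 1)
```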

## Hidden Layer

The hidden layer performs computations and transfers information from the input nodes to the output nodes. During training, the hidden layer’s weights and biases are adjusted to minimize the error between predicted and actual output values. This process is known as backpropagation.

Each neuron in the hidden layer receives all of the outputs from the previous layer, multiplies them by its weights, adds a bias, and passes the result through an activation function. The output of this calculation is then used as an input to the neurons in the next layer.
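A single hidden neuron as described above might look like this in Python; the inputs, weights, and bias are made-up values for illustration:

```python
import math

def hidden_neuron(inputs, weights, bias):
    # Weighted sum of all inputs from the previous layer, plus a bias.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Pass the result through a sigmoid activation.
    return 1.0 / (1.0 + math.exp(-z))

# z = 0.5*0.1 + (-1.0)*0.4 + 2.0*0.2 + 0.05 = 0.1, so the output is sigmoid(0.1).
out = hidden_neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.2], bias=0.05)
print(out)  # ≈ 0.525
```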

If a neuron's output exceeds its threshold, it passes that signal on to the next layer. This process continues until every layer has been evaluated, and the outputs of the final layer's neurons are combined. The result is the final output of the neural network.

## Output Layer

If we have three input values (yellow circles), two hidden layers of four neurons each, and a single-neuron output layer, then a fully connected network will have a total of (3 × 4) + (4 × 4) + (4 × 1) = 12 + 16 + 4 = 32 weights, plus one bias per neuron. The final output value of the demo neural network is computed by taking a weighted sum of the values feeding into each node, adding a bias value, and finally applying a nonlinearity to get the desired result.
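That count can be checked programmatically. This sketch assumes fully connected layers and a single output neuron, matching the demo network above:

```python
# Layer sizes: 3 inputs, two hidden layers of 4 neurons, 1 output neuron.
layer_sizes = [3, 4, 4, 1]

# Each adjacent pair of layers contributes (size_in * size_out) weights.
weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
# Every non-input neuron has one bias.
biases = sum(layer_sizes[1:])

print(weights)  # 32  (3*4 + 4*4 + 4*1)
print(biases)   # 9   (4 + 4 + 1)
```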

At each layer, a neuron's output is compared against its threshold and, if it is above it, passed on to the next layer. Each subsequent processing node does the same, in a layer-by-layer process that is called feedforward. The input signals entering each node are multiplied by their respective weights, which determine the importance of each variable, and summed up. The sum is then passed through the activation function, which produces a different output signal depending on whether the sum lies above or below the threshold.
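The full feedforward pass can be sketched layer by layer, with each layer's outputs becoming the next layer's inputs. The tiny two-input network below, and all its weights and biases, are purely illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    # weights holds one row of input weights per neuron in this layer.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, 0.2]                                            # input layer
hidden = layer_forward(x, [[0.1, 0.3], [0.4, -0.2]], [0.0, 0.1])
output = layer_forward(hidden, [[0.6, 0.9]], [-0.3])      # output layer
print(output)
```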

## Training

Training a neural network uses mathematical functions to calculate values for its weights and thresholds. The aim of this process is to find the parameter values that minimize the network's loss on the training data.

During a forward pass, each node receives input data, multiplies it by its associated weights, and compares the sum with a threshold value. If the result passes this threshold, the node sends its output to the next layer of the network.

This data is analyzed by the hidden layers that make up the rest of the neural network. The final output will then be retrieved from the output layer.
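The training process described in this section can be sketched with a single sigmoid neuron fit by gradient descent on a toy dataset. Everything here (the data, learning rate, and loop length) is illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: the target is simply the second input feature.
data = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0),
        ([1.0, 1.0], 1.0), ([0.0, 0.0], 0.0)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.5        # learning rate

for _ in range(2000):
    for x, y in data:
        # Forward pass: prediction from the weighted sum plus bias.
        pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the cross-entropy loss w.r.t. the pre-activation.
        grad = pred - y
        # Gradient-descent update on each weight and the bias.
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b    -= lr * grad
```

After the loop, the neuron predicts values close to 1 for inputs whose second feature is 1, and close to 0 otherwise.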

Recurrent neural networks can be used for pattern recognition, classification and clustering, and even predictive analytics (e.g., forecasting sales or stock market trends). They feed part of their own output back in as input at each step, giving them a form of memory across a sequence. This feedback loop makes them popular in text-to-speech applications and in making financial predictions.
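The recurrence that gives an RNN its feedback loop can be sketched with scalar weights; the values below are illustrative, not from a trained model:

```python
import math

def rnn_step(x, h_prev, w_in=0.5, w_rec=0.8, b=0.0):
    # The new hidden state depends on the current input AND the
    # previous hidden state -- this is the feedback loop.
    return math.tanh(w_in * x + w_rec * h_prev + b)

h = 0.0
for x in [1.0, 0.0, 0.0]:  # the first input keeps echoing through h
    h = rnn_step(x, h)
    print(h)  # decays but stays positive: the network "remembers" the 1.0
```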