Neural Networks are a fundamental concept in **Deep Learning**, inspired by the structure of the human brain. They consist of interconnected **neurons (nodes)** organized into layers:
![[Brain.webp]]
- **Input Layer** – Takes raw data.
- **Hidden Layers** – Process data using weighted connections and activation functions.
- **Output Layer** – Produces the final prediction.
Neural networks learn through **forward propagation** (data flow) and **backpropagation** (error correction). They are widely used in **image recognition, speech processing, and natural language understanding**.
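To make the layer structure concrete, here is a minimal sketch; the sizes (3 inputs, 4 hidden neurons, 1 output) are illustrative assumptions, not fixed by anything above.

```python
# Illustrative layer sizes for a tiny network: 3 inputs -> 4 hidden -> 1 output.
layer_sizes = [3, 4, 1]

# Every pair of adjacent layers is joined by a weight matrix and a bias
# vector; these parameters are what the network learns during training.
for i, (n_in, n_out) in enumerate(zip(layer_sizes, layer_sizes[1:]), start=1):
    print(f"layer {i}: weight matrix {n_out}x{n_in}, bias vector of length {n_out}")
```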
### Forward Propagation
- Data **flows forward** through the network, from input to output.
- Each neuron computes a **weighted** sum of its inputs, adds a **bias**, and applies an **[[Activation Function]]** to produce its output.
- The final layer produces a **prediction**.
- In short, **forward propagation** is how the network makes predictions.
🔸 **Steps in Forward Propagation** (see the sketch after this list):
1. Multiply inputs by weights and add biases.
2. Apply an [[Activation Function]].
3. Pass the output to the next layer until the final prediction is made.
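The three steps above can be written out directly. A minimal NumPy sketch, assuming hypothetical layer sizes and a sigmoid as the activation function:

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 3 inputs -> 4 hidden neurons -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer parameters
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer parameters

x = np.array([0.5, -1.2, 0.3])                  # raw input

# Steps 1 & 2: multiply inputs by weights, add biases, apply activation
h = sigmoid(W1 @ x + b1)                        # hidden layer output
# Step 3: pass the output to the next layer for the final prediction
y_hat = sigmoid(W2 @ h + b2)
print(y_hat)
```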
### Backward Propagation (Backpropagation)
- Adjusts the **weights and biases** based on errors.
- Uses **Gradient Descent** to minimize the difference between the prediction and the actual output (update rule shown after this list).
- **Backpropagation** improves accuracy by updating the weights and biases.
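For reference, the standard gradient descent update rule, with $\eta$ the learning rate and $\mathcal{L}$ the loss:

$$
w \leftarrow w - \eta \,\frac{\partial \mathcal{L}}{\partial w},
\qquad
b \leftarrow b - \eta \,\frac{\partial \mathcal{L}}{\partial b}
$$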
🔸 **Steps in Backpropagation** (see the sketch after this list):
1. Compute the **loss** (error) between prediction and actual value.
2. Calculate how much each weight contributed to the error (gradient).
3. Adjust weights using **Gradient Descent** (or other optimization techniques).
4. Repeat until the model converges (low error).
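Putting the four steps together: a minimal sketch of the training loop for a single linear neuron with mean-squared-error loss. The data, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

# Synthetic data: 100 samples, 3 features, targets from a known rule
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 1.0

w, b = np.zeros(3), 0.0                  # start from arbitrary parameters
lr = 0.1                                  # learning rate (eta)

for epoch in range(200):
    y_hat = X @ w + b                    # forward pass: make a prediction
    err = y_hat - y
    loss = np.mean(err ** 2)             # step 1: compute the loss
    grad_w = 2 * X.T @ err / len(y)      # step 2: gradient of each weight
    grad_b = 2 * err.mean()              #         and of the bias
    w -= lr * grad_w                     # step 3: gradient descent update
    b -= lr * grad_b                     # step 4: repeat until converged

print(loss, w, b)                        # loss should be near zero
```

Each pass through the loop is one forward pass (prediction) followed by one backward pass (gradient computation and parameter update).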
![[propagation.png]]