# Recurrent Neural Networks

## Instructions

Useful when you have sequential data of different lengths, for example stock prices where companies joined the market at different times (so each has a different number of rows of "trading days" in the market).

The network works by:
1. Start from the oldest data point and calculate the output of the activation function.
2. Feed that value into the "sum" step before the activation function for the next observation (so its activation function uses both inputs).
3. Repeat steps 1+2 (also called a *feedback loop*) up to the most recent data point, and use the final output to generate a prediction for the next (unseen) observation.

The RNN uses the same weights and biases at every step, which means it doesn't differentiate between observations (all run through the same calculation, with the same weights and biases). This also means that no matter how many times you *unroll* the network over a dataset (however many "days" of observations), you still train the same number of weights.

RNNs are not often used because of the (Jump:: [[Exploding or vanishing gradient problem]]).

## Overview

🔼Topic:: [[Neural Networks]]
◀Origin:: [[StatQuest]]
🔗Link:: https://www.youtube.com/embed/AsNTP8Kwu80

<iframe width="560" height="315" src="https://www.youtube.com/embed/AsNTP8Kwu80" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
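The feedback-loop steps above can be sketched as a minimal single-unit RNN. This is an illustration only: the function name, the toy price sequences, and all weight values below are hypothetical, not from the source. The key point it demonstrates is that the *same* five parameters are reused at every unrolled step, so sequences of different lengths need no extra weights.

```python
import numpy as np

def rnn_unroll(values, w_input, w_feedback, bias, w_out, b_out):
    """Unroll a single-unit RNN over a sequence, reusing the SAME
    weights and bias at every step (the shared-weights property)."""
    hidden = 0.0
    for x in values:
        # "sum" step: the new input plus the fed-back previous activation,
        # both passed through the same activation function (tanh here)
        hidden = np.tanh(w_input * x + w_feedback * hidden + bias)
    # final output -> prediction for the next (unseen) observation
    return w_out * hidden + b_out

# hypothetical toy price series of different lengths
prices_short = [1.0, 0.5]
prices_long = [1.0, 0.5, 0.25, 1.0]

# the exact same 5 parameters handle both sequences
params = dict(w_input=0.5, w_feedback=0.3, bias=0.0, w_out=2.0, b_out=0.1)
print(rnn_unroll(prices_short, **params))
print(rnn_unroll(prices_long, **params))
```

Note that unrolling more steps only repeats the loop body; it never adds parameters, which is why training cost in weights stays constant regardless of sequence length.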