# Gradient Ascent

- Used to maximize an objective function, unlike [Gradient Descent](Gradient%20Descent.md), which minimizes a loss
- Update step is proportional to the positive of the gradient
- $\theta_{t+1} = \theta_t + \eta_t \sum_{n=1}^N (\nabla l_n(\theta_t))^T$
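A minimal sketch of the update rule above on a toy objective (the function $f(\theta) = -(\theta - 3)^2$ and the step size are illustrative choices, not from the note):

```python
# Gradient ascent: step in the direction of the positive gradient
# to maximize f(theta) = -(theta - 3)^2, whose gradient is -2*(theta - 3).

def grad(theta):
    return -2.0 * (theta - 3.0)

theta = 0.0   # initial parameter theta_0
eta = 0.1     # fixed step size eta_t
for t in range(100):
    theta = theta + eta * grad(theta)  # theta_{t+1} = theta_t + eta_t * gradient

print(round(theta, 4))  # converges to the maximizer theta = 3
```

With a sum over $N$ per-example terms $l_n$, the gradient in the loop would simply be the sum of the per-example gradients, as in the formula.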