# Rescorla-Wagner Algorithm

- Rescorla & Wagner (1972): animals and humans also learn associations by paying [Attention](Attention.md) to what is *not* associated.
- $\Delta V = \alpha \beta (\lambda - \Sigma V)$
	- $V$ = association strength
	- $\Delta V$ = change in association strength
	- $\lambda$ = maximum value of the unconditioned stimulus (US)
		- Set to 1 when the US is present (e.g., food)
		- Set to 0 when it is not present
	- $\alpha$ = learning rate
	- $\beta$ = varies the effects of negative or positive evidence
	- $\Sigma V$ = sum of association strengths for all cues/[Features](Features.md)/conditioned stimuli
- Negative instances are also useful for learning
	- Logical Problem of Language Acquisition: children don't get negative evidence, so language must be innate
- [Cross-situational learning](Cross-situational%20learning.md)
- [Propose-but-verify](Propose-but-verify.md)
- [Rescorla-Wagner Blocking](Rescorla-Wagner%20Blocking.md)
- Rescorla-Wagner is error-driven
	- After a strong association is made, as long as it is confirmed by data, no new learning will occur
	- The model only learns when the predicted outcome differs from the actual outcome
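
The update rule and its error-driven character can be sketched as follows; this is a minimal illustration, with assumed parameter values ($\alpha = 0.1$, $\beta = 1$) and an invented `rw_update` helper, not an implementation from the 1972 paper:

```python
def rw_update(V, present_cues, us_present, alpha=0.1, beta=1.0):
    """One trial of Rescorla-Wagner: update V[cue] for each present cue."""
    lam = 1.0 if us_present else 0.0           # λ: 1 when US present, else 0
    sigma_v = sum(V[c] for c in present_cues)  # ΣV: total prediction from present cues
    error = lam - sigma_v                      # learning is error-driven: zero error, zero change
    for c in present_cues:
        V[c] += alpha * beta * error           # ΔV = αβ(λ − ΣV)
    return V

# Blocking demo: train cue A alone until it predicts the US,
# then present A and B together.
V = {"A": 0.0, "B": 0.0}
for _ in range(100):
    rw_update(V, ["A"], us_present=True)        # A comes to fully predict the US
for _ in range(100):
    rw_update(V, ["A", "B"], us_present=True)   # error ≈ 0, so B is "blocked"

print(V)  # V["A"] near 1, V["B"] near 0
```

Because $\Sigma V$ is already close to $\lambda$ after the first phase, the prediction error in the second phase is near zero and cue B acquires almost no strength, which is the blocking effect linked above.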