[Link to Conference Program (University of Colorado)](https://www.colorado.edu/amath/sites/default/files/attached-files/framsc2024_3morning.pdf)
## Abstract:
We devise a framework for bridging the gap between reinforcement learning in games or systems and strategic knowledge extraction, applying reinforcement learning with genetic algorithmic mechanics to differential equations governing the behavior of agents. This builds on previous work and differs in its application of genetically mutating equations to the reinforcement learning problem. Prior work on reinforcement learning of strategy lacks the ability to discover open-form equations with highly complex and nonlinear structure, while the sparse prior work on genetically mutating equations targets the discovery of physical dynamics rather than strategies or policies. This work draws from both to create a generalized method for learning optimal strategy encoded as differential equations. We use reinforcement learning to obtain optimal strategy dynamics directly from the rules or physics of the game or system rather than from a dataset, and we explore results of the approach on simple theoretical games.
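As a rough illustration of the idea (not the implementation presented in the talk), the sketch below evolves the coefficients of a candidate strategy ODE $\dot{x} = f(x)$ against a toy reach-the-target game, where fitness is a reward computed by simulating the game's rules rather than fitting a dataset. The polynomial basis, mutation scheme, selection rule, and the toy game itself are all assumptions chosen for this example.

```python
import random

# Toy "game": an agent with state x starts at 0 and wants to reach a target
# position by time T. The strategy is encoded as an ODE dx/dt = f(x), where
# f is a polynomial whose coefficient vector is the genome being evolved.
# (Target, basis, and reward are illustrative choices, not those from the talk.)
TARGET, T, DT = 1.0, 1.0, 0.01

def simulate(coeffs, x0=0.0):
    """Forward-Euler integrate dx/dt = sum_k coeffs[k] * x**k and return
    the terminal reward (negative squared distance to the target)."""
    x = x0
    for _ in range(int(T / DT)):
        dxdt = sum(c * x**k for k, c in enumerate(coeffs))
        x += DT * dxdt
    return -(x - TARGET) ** 2  # reward comes from the game rules, not data

def mutate(coeffs, scale=0.1):
    """Gaussian perturbation of one randomly chosen coefficient."""
    child = list(coeffs)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, scale)
    return child

def evolve(pop_size=50, n_coeffs=3, generations=200):
    """Simple elitist loop: keep the fittest strategy equations and
    refill the population with mutated copies of the elites."""
    population = [[random.gauss(0.0, 0.5) for _ in range(n_coeffs)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=simulate, reverse=True)
        elites = scored[: pop_size // 5]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    best = max(population, key=simulate)
    return best, simulate(best)

if __name__ == "__main__":
    best, reward = evolve()
    print("best dx/dt coefficients:", [round(c, 3) for c in best])
    print("terminal reward:", round(reward, 4))
```

In this toy setting the evolved coefficients define the learned strategy dynamics in closed form; the framework described in the talk plays the same role for richer games and more expressive equation structures.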
## Slides
![[slide_1.png]]
![[slide_2.png]]
![[slide_3.png]]
![[slide_4.png]]
![[slide_5.png]]
![[slide_6.png]]
![[slide_7.png]]
![[slide_8.png]]
![[slide_9.png]]
![[slide_10.png]]
![[slide_11.png]]
![[slide_12.png]]
![[slide_13.png]]
![[slide_14.png]]
![[slide_15.png]]
![[slide_16.png]]
![[slide_17.png]]