> [!NOTE]
> The goal of this note is to discuss a difference between the way I do my research and the way other researchers do. See [[How my approach is different]] for more differences.

Much of the impressive AI research out there sets out to create an AI that's as smart as possible, one that can solve meaningful (or at least difficult) problems for us. GPT-3 and LaMDA can (almost) solve the problem of holding a conversation with human beings. GANs can generate faces that look like real people. AlphaGo can beat the world champion at Go. AlphaFold can predict how proteins fold.

In this kind of research, I imagine the AI as a genius or an oracle. The researchers try to make it more and more capable of making good decisions. One problem is that when it makes bad decisions, it can be difficult to track down what went wrong and figure out how to fix it.

My research is different. I'm not trying to make the agents smart, or to get them to solve meaningful problems. I want them to be dumb, and to solve unimportant problems. The RL agents aren't geniuses; they're rats running around in a maze. I'm gaining insights by watching the ways they interact with each other.

It's possible that if we get multiple agents to cooperate on a meaningful task for us, we'll get better performance, in the same way that a team of people can do things that a single genius can't. I don't know whether we'll succeed, but I hope so.