#writing
What happens if everyone has perfect knowledge and makes perfect decisions?
1. The importance of classification
2. Increased efficiency of decision making in pursuit of goals will only increase the weight that luck has in outcomes. As the alpha from skill declines, luck plays a larger role and skill becomes a rarefied commodity held by fewer people. Inefficiency can be stabilizing and promote equality.
3. Highly efficient individual behavior can lead to emergent system-level complexity
4. Overly strong predictive models can distort our perception of the world
>We choose theories that lead to good predictions, or rules that lead to good outcomes
>-*Alison Gopnik*, The Philosophical Baby
---
**December 2, 2024**
In [[CHG Issue 172 Classification in a Quantum World]] we tried to discuss the importance of [[Classification]] in a rapidly changing and abstract world. Daniel Kahneman described humans as "prediction machines" and won the Nobel Prize for his work on the cognitive biases that fuel our predictions and the decisions that follow from them. Many have followed in his footsteps, cataloguing the many ways we are hard-wired to bend the world to fit a certain predictive model. It is widely accepted today that our perception of reality is subjective, a function of our beliefs rather than of any absolute truth.
In this relativistic mindset, classification becomes extremely important by providing the classes that we end up comparing. When we model something, we specify the probabilistic relationship between different variables or classes. Is this email spam? Is a credit card transaction fraudulent? Which football team will win the Super Bowl? **Classification allows us to employ our beliefs more efficiently by reducing [[uncertainty]] and allowing us to move quickly towards our goals.**
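The spam question above is the classic example of specifying a probabilistic relationship between classes. As a minimal sketch (not any particular production system), here is a tiny naive Bayes classifier in pure Python; the training examples and word lists are made up for illustration:

```python
import math
from collections import Counter

# Toy training data: (text, label) pairs. All examples are invented.
train = [
    ("win cash prize now", "spam"),
    ("free prize click now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with team", "ham"),
]

# Count word frequencies per class.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def log_posterior(text, label):
    """log P(label) + sum of log P(word | label), with add-one smoothing."""
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    total = sum(word_counts[label].values())
    for w in text.split():
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    # Pick whichever class makes the text least surprising.
    return max(("spam", "ham"), key=lambda lbl: log_posterior(text, lbl))

print(classify("free cash now"))       # words seen mostly in spam
print(classify("team meeting agenda")) # words seen mostly in ham
```

The model is nothing more than a formalized belief: which class makes this evidence least surprising? That is exactly the uncertainty-reducing move the paragraph describes.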
Social coordination is built on classes and our beliefs about those classes, and is ultimately an attempt to reduce uncertainty. Capitalism is a form of social coordination which gives us greater confidence in other people's behavior based on the principles of voluntary exchange, a price mechanism that communicates value, a competitive market, private property, and the value of time or wage labor. This coordination allows for complex interactions within the economy, scaling commerce in ways we could never have imagined on our own. However, since this optimization was never planned and only _emerged_ from increased coordination, it is inherently difficult to understand, and people find it difficult to trust things they don't understand.
>As the number of diverse actors whose behavior affects your economic opportunities, your political liberties, and your cultural values increases, the complexity of ‘your’ network increases–and, it becomes harder to reason about the world and make decisions towards achieving your goals in a larger, more connected world. As the scale and complexity of the global capitalist system grows, more and more often events opaque to each individual (housing market failures, changes in labor supply on another continent) will both impact outcomes for each individual and undermine each individual’s confidence in their own agency, weakening the predictive model for everyone.
>_[Andrew Kortina](https://kortina.nyc/essays/social-systems-are-computations-that-minimize-uncertainty/)_
The advent of [[artificial intelligence]] adds fuel to this fire by making our decision making even more efficient. AI has fueled a boom in the stock market and a flood of investment dollars into the new industry, even as many insiders warn of its dangers. We have already seen how highly efficient and optimized behaviors can result in emergent complexity and unpredictability, but it’s a complex idea best explored by way of a poker story.
When I first started playing poker, I was in the minority of people who knew the odds, and that gave me an edge. As the game has grown in popularity, you can now flip on ESPN and see the odds right there on your screen. Game Theory Optimal (GTO) play is common knowledge in the poker world, and players today may purposely play suboptimal strategies to throw their opponents off. The increased knowledge and efficiency of decision making hasn’t eliminated edge or complexity; it has just transformed them. In fact, I would argue the game is far more complex today than it was twenty years ago. Which brings up an interesting question: **Does the increase in overall skill of poker players reduce the role that skill plays in the outcome of a game and increase the role that luck plays?**
Essentially, as we remove errors from the game, it becomes more random and unpredictable. Before, a tourist might sit down at your table and the sharks would take their money; if the table is all sharks, the outcomes become far less predictable. This implies that error can be stabilizing for a system. Think of the certainty of a favorable P&L that the tourists provide the sharks. What happens when that is removed from the game entirely?
If we extrapolate that experience to a future where LLM-powered AI agents replace human players, we can see the limits of statistical optimization techniques. In that hypothetical game, each robot calculates the exact odds and makes the perfect decision, removing all error and edge from the game and leaving the outcomes linearly derived from the random fall of the cards. **By removing human error, we increase uncertainty, because randomness is not predictable.** True, the robots can also count cards and predict outcomes with more precision, but there is no unique insight there; it’s just another form of removing error (players who can’t count cards). So uncertainty diminishes as the deck is played out in a game like blackjack, but not in poker, where the deck is reshuffled after every hand.
We (me included, very much so) have come to hold efficiency up as an idol. But if we were all efficient decision makers, where would that leave us? If there were no alpha from being smarter or a better decision maker, would that mean we had achieved equality? Unfortunately, as our access to data, information, and knowledge has increased, society has become more tribal.
>The tribalist strategy for reducing complexity and uncertainty in a network of seven billion independent actors is to categorize these actors into two groups: “us” and “them.” There are a billion ways to draw the line (citizen vs foreigner, liberal vs conservative, man vs woman, black vs white, straight vs lgbt, etc), but no matter which way you draw the line, the strategy is the same: you can’t ever hope to develop precise models for predicting the behavior of each individual (perhaps any single individual can hold up to [150](https://en.wikipedia.org/wiki/Dunbar%27s_number) precise models for other individuals in their brain?), but you can develop a pretty accurate model for predicting your own behavior. By dividing the world into two groups, those that share your very specific model and those that use an effectively opposite model, you can make very specific predictions about every other actor in the world.
>_[Andrew Kortina](https://kortina.nyc/essays/social-systems-are-computations-that-minimize-uncertainty/)_
Where capitalism gives us a high-confidence, low-precision model for human behavior, tribalism gives us a low-confidence but high-precision model. Your predictions within your tribe are going to be very good; however, overly strong prediction models can distort our perception of the world. Overfitting, the bane of all data scientists, refers to a model that works well in sample but not as well out of sample. We end up with a very strong model that works well within our tribe but fails outside of it, and that failure creates a breeding ground for anger, resentment, hate, and racism when things don’t work out the way we think they should.
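Overfitting is easy to demonstrate with a toy example. In this sketch (invented data, not from any real study), the "overfit" model simply memorizes every training point, the statistical analogue of a tribal model tuned perfectly to its in-group, while a simple model just tracks the underlying trend:

```python
import random

random.seed(1)

# Underlying relationship: y = 2*x plus noise. The noise is the part
# a model should NOT try to explain.
def sample(n):
    xs = (random.uniform(0, 10) for _ in range(n))
    return [(x, 2 * x + random.gauss(0, 5)) for x in xs]

train, test = sample(30), sample(30)

# "Overfit" model: a lookup table that memorizes every training point,
# answering unseen inputs with the nearest memorized x.
memory = dict(train)
def overfit(x):
    nearest = min(memory, key=lambda m: abs(m - x))
    return memory[nearest]

# Simple model: just the true linear trend, ignoring the noise.
def simple(x):
    return 2 * x

def mse(model, data):
    """Mean squared error of a model over a dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(f"overfit  train={mse(overfit, train):.1f}  test={mse(overfit, test):.1f}")
print(f"simple   train={mse(simple, train):.1f}  test={mse(simple, test):.1f}")
```

The memorizing model scores a perfect zero error in sample and falls apart out of sample, because it has learned the noise along with the signal. That is the tribal model: flawless predictions inside the group, systematic surprise outside it.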
What we are seeing here is that [[entropy]] always wins. No matter how much we try to reduce uncertainty and disorder, it is a natural feature of our world, and our effort to reduce it will only lead to some new form of it emerging. Anthropic recently released its new Claude 3.5 Sonnet, which includes a new capability via the API: [computer use](https://www.youtube.com/watch?v=vH2f7cjXjKI&feature=youtu.be&themeRefresh=1). This means that you can now have an AI use the computer for you. The hope is once again for increased efficiency, but each additional layer of efficiency hides a growing mass of [[complexity]] beneath the surface that fewer and fewer people understand. Round and round we go…