# The Great Mental Models Volume 1
## ✒️ Note-Making
### 🔗Connect
⬆️Topic:: [[Mental Shortcuts]]
⬆️Topic:: [[Decision Making (MOC)]]
### 💡Clarify
🔈 *Summary of main ideas*
1. **Better thinkers have fewer blind spots** - decision-making is often a loser's game: by avoiding mistakes we can drastically improve our well-being. We make many mistakes because we can't know everything, and learning from experience is expensive and slow.
2. **Expand your mental-models toolkit** - don't reinvent the wheel; learn from others' mistakes and wisdom. With efficient, proven mental models at our disposal, we are better at understanding reality in an unbiased way.
### 🗒️Relate
⛓ *by following this method, what will happen? What is the goal of this book?*
You will make fewer mistakes and have fewer blind spots when making decisions, because you can draw on proven, efficient mental models in your decision-making toolkit.
### 🔍Critique
✅ *relevant research, metaphors or examples that helps to convey the argument*
❌ *the logical jumps, holes or simply cases where it is wrong...*
🧱 *Implementations and limitations of it are...*
There is little guidance on how and when to pick each model.
### 🗨️Review
💭 *my opinions on the book, the writers style...*
The book is interesting: on the one hand it lacks the grand argument of typical single-topic non-fiction books, yet on the other hand it is a collection of good mental models that can be used in many situations.
I would recommend it as a starter book for people who are just getting into the world of decision-making.
### 🖼️Outline
## 📒 Notes
### Introduction: Acquiring Wisdom
Being a good decision-maker can be similar to playing a (Reminds:: [[winner or loser game|loser's game]]): the fewer blind spots we have, the better decisions we can make, because increased knowledge reduces (Jump:: [[Uncertainty]]).
Since we can't know everything, and we won't have time to learn everything, it is useful to have mental models, which serve as a form of (Jump:: [[Effort Storing]]). They are (Jump:: [[medium independent]]) ideas that hold true in many contexts. For example, ideas from physics are relevant to social interactions as well (Aka:: [[Transferred Learning]]).
The more mental models in your toolkit, the more (Supports:: [[Adaptability|Flexible]]) you are, capable of adapting and combining multiple ways of looking at and understanding reality, which makes you a better decision-maker. You might even approach (Related:: [[Objectivism]]): seeing reality "as it is".
Mental models are not true in every situation, nor are they permanent. They can and should be updated based on (Aka:: [[Testing|Experimentation]]). However, sometimes we are (Aka:: [[Attachment|too close]]) or our ego is too involved for us to see that a model no longer works or needs updating. Instead of admitting our (Jump:: [[failure|mistakes]]), we double down and try to prove to the world that we are right, only digging ourselves deeper, or we give in to (Jump:: [[Confirmation Bias]]). That's why (Jump:: [[introspection|Self Reflection]]) is a critical part of the learning process.
Sometimes we lack the (Jump:: [[Feedback]]) necessary to reflect and learn, especially when we are distanced from the results of our decisions.
> [!Quote]- Introduction: Acquiring Wisdom
> - thinking better isn’t about being a genius. It is about the processes we use to uncover reality and the choices we make once we do. ([Location 115](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=115))
> - The more lenses used on a given problem, the more reality reveals itself. The more of reality we see, the fewer blind spots we have. The fewer blind spots we have, the better the options at our disposal. ([Location 157](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=157))
> - Without reflection, we cannot learn.[7] Without learning, we are doomed to repeat mistakes, ([Location 259](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=259))
### The Map is not the Territory
To (Jump:: [[Simplicity|simplify]]) the great (Jump:: [[Complexity]]) of the world, we create maps, which serve as proxies for the territory. For example, we make maps for navigation and use financial reports to measure a company's health. Yet we fail when we treat the map as a perfect representation of the territory (Reminds:: [[Goodhart’s Law]]): we become so fixated on the map that we forget the simplification process removes (vital) information and freezes it in time. By definition, the map can't contain all the information; otherwise, it would be the same size as the territory itself. A simplified version means less information and lower quality, a limited snapshot of reality (Reminds:: [[shallow copy]]).
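Since the note reaches for the (Reminds:: [[shallow copy]]) analogy, here is a minimal Python illustration of it (my example, not the book's):

```python
# The map as a snapshot of the territory, via the shallow-copy analogy.
territory = {"population": 50_000, "roads": ["A1", "B2"]}

snapshot = dict(territory)          # the "map": a shallow copy taken now

territory["population"] = 65_000    # the territory keeps changing...
territory["roads"].append("C3")     # ...including its nested structure

print(snapshot["population"])  # 50000 -- frozen at copy time, now out of date
print(snapshot["roads"])       # ['A1', 'B2', 'C3'] -- shared internals leak through
```

Like a map, the snapshot is cheap and useful, but it neither tracks the original nor fully owns its own details.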
Unfortunately, we often subconsciously assume that if a map represents one aspect well, it is accurate in other respects as well (Reminds:: [[Halo effect]]).
To overcome this problem:
1. **Reality is more important than the map** - reality changes; the map doesn't. Reality is complex and rich; the map isn't.
2. **Consider the map's creator** - maps reflect the agenda, values, and blind spots of their creators.
> [!Quote]- The Map Is Not the Territory
> - Relying solely on maps can lead you to the wrong conclusion. You need to touch the territory. ([Location 443](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=443))
> - The faster the rate of change in the territory, the harder it will be for a map to keep up-to-date. ([Location 553](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=553))
> - Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators. ([Location 557](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=557))
### Circle of Competence
Have (Supported:: [[Self-awareness]]): know exactly what your (Jump:: [[Competitive Advantage]]) is, where the borders of your (Jump:: [[Competence]]) lie, what you are proficient in, and what you're clueless about.
Reality is constantly changing, so our circle will change as well. We have to (Jump:: [[Lifelong Learning|constantly learn]]) to maintain it. It is not achieved overnight; it takes a lot of effort. Those who invest only a short time might fall prey to the (Jump:: [[Dunning-kruger effect|Dunning-Kruger effect]]).
To expand your circle of competence:
1. (Supports:: [[Imitation|Imitate]]) others; use their hard-earned knowledge to progress faster than you could on your own.
2. Find a (Supports:: [[Mentor]]), someone who can see what you don't, an outsider's perspective that can provide valuable feedback.
3. (Jump:: [[Journaling|Document]]) your thoughts; record which choices you made and why, so that you can learn from them when you see the results (Aka:: [[post-mortem]]).
> [!Quote]- Circle of Competence
> - There is no shortcut to understanding. Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought. ([Location 685](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=685))
### First Principles Thinking
First principles thinking is a form of (Jump:: [[Fundamentalism]]): we search for the basic, undeniable truths and start reasoning from there instead of being fixated on the status quo. The world is like a pre-built Lego set: while we usually make marginal changes to what already exists (Reminds:: [[Anchoring]]), we could theoretically build completely different things from the same pieces.
First principles thinking requires heavy (Jump:: [[Critical Thinking|Skepticism]]): challenging everything we know, everything we are told is true, and all of our assumptions. We break everything apart until we reach an undeniable truth. It is not easy, but this type of thinking is usually what leads to breakthroughs rather than incremental improvements.
> [!Quote]- First Principles Thinking
> - If we never learn to take something apart, test our assumptions about it, and reconstruct it, we end up bound by what other people tell us is possible. ([Location 890](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=890))
### Thought Experiment
Thought experiments are a form of mental (Jump:: [[Simulations]]) that can help us reason about and discover things that are much harder, or impossible, to test in reality (Aka:: [[Rationalism]]).
Common use cases for thought experiments:
1. **Test the impossible** - isolate pure effects under perfect conditions, as in the *veil of ignorance* or the *experience machine*.
2. **Consider what if** - imagine different realities and reason about the (Aka:: [[Counterfactual]]): what could have happened under different circumstances, or if we had acted differently.
3. **Uncover intuitions** - present a sterile case to see what our intuitions about the subject are, as in the *trolley problem*.
> [!Quote]- Thought Experiment
> - The power of thought experiments lies in their ability to create a simplified model of reality where we can test our ideas. ([Location 1158](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=1158))
### Second-order Thinking
Second-order thinking means considering not only the immediate results of our decisions but also the consequences of those consequences.
Usually, we are so focused on the problem at hand that we forget to consider the broader implications (Aka:: [[Present Bias|Short term bias]]).
The *tragedy of the commons* is a classic case: everyone focuses on their short-term gain, failing to see how they will all be worse off because of it. Similarly, cutting corners in high school might give you good grades, but you will then lack the skills necessary to hold a job successfully.
Remember that you can't accurately (Aka:: [[Prediction|predict]]) the future, so be wary of slippery-slope arguments.
### Probabilistic Thinking
Deciding what to do depends on the probabilities of each potential outcome. We should remember that it all depends on (Jump:: [[Context]]): each situation has its own features that influence the probability, but we should also be familiar with the (Jump:: [[Statistical Distribution]]) and the (Jump:: [[conditional probability]]) of the "average case" we are in. For example, before launching a business, we should know the probability of success for the "average" business (the base rate), and then adjust it to our circumstances, as sketched below.
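A minimal sketch of that adjustment using Bayes' rule in odds form (all numbers are hypothetical, chosen only to show the direction of the update):

```python
# Start from the base rate for the "average" business, then update on a
# feature of our specific situation using Bayes' rule in odds form.
base_rate = 0.20        # assumed P(success) for the average new business
likelihood_ratio = 2.0  # assumed: industry experience is twice as common
                        # among successful founders as among failed ones

prior_odds = base_rate / (1 - base_rate)           # 0.25
posterior_odds = prior_odds * likelihood_ratio     # 0.50
posterior = posterior_odds / (1 + posterior_odds)  # ~0.33

print(f"Adjusted P(success): {posterior:.2f}")
```

Note how even a strongly favourable signal only moves the estimate from 20% to about 33%: the base rate keeps the adjustment honest.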
Other problems with thinking about probabilities are:
1. **Unequal distributions** - some things are not normally distributed, and the consequences are not equal in strength. For example, more people are late than early, and gambling your entire savings is far more harmful if you lose than it is beneficial if you win (see the sketch after this list).
2. **Correlation is not causation** - we sometimes (Aka:: [[Correlation is not causation|imagine]]) causality where there is none. Only (Jump:: [[Hypothesis Testing]]) can help us sort the random from the real (Reminds:: [[Regression towards the mean]]).
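A toy sketch of that asymmetry (the bet and all numbers are hypothetical; log utility is just one common stand-in for "losses hurt more than equal wins help"):

```python
import math

# Even-odds bet on your savings: double them, or lose 90% of them.
wealth = 100_000
outcomes = {2.0 * wealth: 0.5, 0.1 * wealth: 0.5}  # resulting wealth -> probability

expected_wealth = sum(w * p for w, p in outcomes.items())
print(f"Expected wealth: ${expected_wealth:,.0f}")  # $105,000 -- looks favourable

# Under log utility, the bet is worth the wealth level whose log equals
# the expected log-wealth (its certainty equivalent):
certainty_equivalent = math.exp(sum(p * math.log(w) for w, p in outcomes.items()))
print(f"Certainty equivalent: ${certainty_equivalent:,.0f}")  # ~$44,721
```

The expected dollar value says "take the bet", but the certainty equivalent (~$44,721) is far below the $100,000 you already hold, so the downside dominates.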
### Inversion
(Jump:: [[Reverse Thinking|Inversion]]) means flipping how we think about a problem. Instead of "how can I be happy?", ask "what makes me sad?"; instead of "how do I sell more products?", ask "what do my clients want?". Taking a different viewpoint on the same question often leads to creative thinking (Aka:: [[Ideation]]). The good yet non-obvious solutions are often cases of (Supports:: [[Addition by subtraction|less is more]]): we can be much happier simply by stopping the things that make us sad, such as bad nutrition, social media, and overworking.
> [!Quote]- Inversion
> - Instead of asking, “How do I solve this problem?” inversion asks, “What would guarantee failure?” Instead of asking, “How can I achieve this goal?” it asks, “What is preventing me from achieving it?” By inverting the question, we can gain insights that our normal thought patterns might miss. ([Location 1772](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=1772))
### Occam's Razor
Simplicity is a tie-breaker between explanations: the simpler answer is more likely to be true because it relies on fewer assumptions and conditions that might be false or hard to satisfy (Aka:: [[occam's razor]]).
This is a tendency, not a rule, and it is useful only where the issue can genuinely be simplified; complex systems often have no simple explanation that works.
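One way to make the "fewer assumptions" point precise (a sketch of my own, not from the book): if an explanation rests on $n$ independent assumptions $A_1, \dots, A_n$, each true with probability at most $p < 1$, then
$$
P(A_1 \cap A_2 \cap \dots \cap A_n) = \prod_{i=1}^{n} P(A_i) \le p^n,
$$
so each added assumption can only shrink the probability that the explanation as a whole holds; at $p = 0.9$, five assumptions already drop it to $0.9^5 \approx 0.59$.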
### Hanlon's Razor
In the spirit of Occam's Razor, the explanation that attributes the least malice is usually the right one (Aka:: [[Hanlon's Razor]]). For example, if someone cuts you off in traffic, it is more likely that they didn't see you or made a mistake than that they did it on purpose.
> [!Quote]- Occam’s Razor
> - use: if all else is equal—that is, if two competing models both have equal explanatory power—it’s more likely that the simple solution suffices. ([Location 1800](https://readwise.io/to_kindle?action=open&asin=B0CSFH8VQS&location=1800))