modified: 2024-12-25T14:26:04-05:00
# Chapter 1
- read chapter 1 - a few thoughts
- system 1 - impulsive, automatic, quick, efficient, low effort, no voluntary control
- system 2 - focused attention, computation, slower, logical, rational, less efficient, "I", consciousness
- Gorilla suit experiment - system 1 and system 2 share a limited pool of cognitive resources. When system 2 is heavily engaged, system 1 is less effective - in this case, at scanning and peripheral awareness.
- However, participants were shocked to hear that they missed such a thing. We are often blind to our own cognitive limitations - we don't know what we don't know.
- I wonder if there is a way to increase the capacity of cognitive resources, or if it's a value that's hardwired into our brain. Culadasa mentions the concept of using meditation to increase "raw conscious power," but its relevance needs further investigation.
>The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.
- System 2 is associated with self control and discipline, preventing you from performing actions or engaging in behavior with negative consequences. Potential link to the pre-frontal cortex, but Kahneman says that the two systems aren't necessarily tied to any specific part of the brain.
- Word column activity demonstrates that it's more difficult to say upper/lower or left/right when there's conflicting input from system 1. It's impossible to turn off system 1, even when system 2 is engaged. System 1's input into the decision-making algorithm will always be present, despite efforts to turn it off.
- Müller-Lyer illusion - you can set cognitive rules for preventing system 1's errors, but system 1's constant activation will continue to warp your perception. The brain can flag lines that have the format of the Müller-Lyer illusion, but it will still perceive one line as longer than the other.

- Another example: a doctor might feel sympathy for a patient who has had repeated failed treatments, but objectively, it's a warning sign. However, the doctor won't be able to turn off his impulse to feel sympathy.
- This implies that illusions can and will stay active, even if system 2 has flagged them as illusions.
Important paragraph from the book:
> The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2. As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.
- The compromise presented here is similar to the one I proposed in [Discipline - First Principles Analysis](Discipline%20-%20First%20Principles%20Analysis.md). I wrote in my analysis: "Through practice and repeated diligent efforts, the meta-awareness decision making pathway converges towards (but never exactly becomes) the default."
- aside: I wonder where meta-cognitive awareness falls within the 2 systems. I think it's a subset of system 2 thinking, and could actually be thought of as a kind of "system 3."
- However, Kahneman seems to imply that system 2 override is a constant ongoing activity that doesn't become easier over time, as system 1 cannot be turned off.
- At the same time, he does seem to suggest that certain system 2 activities become more automatic as skill increases: the load on system 2 is offloaded to system 1. It's unclear whether this is situational or generalized; leaning towards situational as of now.
- System 2 also typically endorses system 1 and rationalizes its behavior, which effectively hijacks the brain into believing that the incorrect decision is logical. Thus, incorrect decisions often "feel right."
- System 1 and system 2 remind me of NP-hard problems in computer science, where we sometimes opt for a more efficient but inexact "heuristic" algorithm over a computationally expensive but complete solution.
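The analogy can be made concrete with a toy example of my own (not from the book): an exhaustive solver versus a greedy heuristic for the 0/1 knapsack problem, an NP-hard problem. The exhaustive search plays the role of system 2 (slow, effortful, correct), the greedy ratio heuristic plays the role of system 1 (fast, cheap, sometimes wrong). The item values here are made up for illustration.

```python
from itertools import combinations

ITEMS = [(60, 10), (100, 20), (120, 30)]  # (value, weight) pairs
CAPACITY = 50

def knapsack_exact(items, capacity):
    """System-2-like: check every subset. Correct, but exponential time."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

def knapsack_greedy(items, capacity):
    """System-1-like: grab items by value/weight ratio. Fast, but fallible."""
    total = 0
    for value, weight in sorted(items, key=lambda x: x[0] / x[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total
```

On this instance the greedy heuristic returns 160 while the exhaustive search finds 220, mirroring how system 1's quick answer can feel adequate while being measurably worse than the deliberate one.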
Additional thoughts on relevance:
- This book seems to provide a strong foundational framework for analyzing cognitive biases. But do all cognitive biases fit into the framework? This will be important for trading applications.
- Here are 5 randomly generated cognitive biases: confirmation bias, anchoring bias, the availability heuristic, the Dunning-Kruger effect, and the sunk cost fallacy. Are all of these rooted in system 1 mirages?
- In this case, yes - all 5 of these biases can largely be attributed to, or rooted in, system 1 thinking that can be overridden by system 2. But that doesn't yet prove that all existing cognitive biases follow the same pattern.
- Meta-analysis (analysis of analysis): in my above analysis of this chapter, I actually relied on the following method: read/listen to the chapter in full, make note of any thoughts that popped up, and go through the section again to review the topics that were noteworthy. But this could be an over-reliance on system 1. We know that we are often unaware of what we don't know, and I'm wondering if this method actually misses a lot of useful information.
- One way to combat the flaws here would be to use AI for alternative analysis.
- The goal here seems to be identifying system 1 illusions and using system 2 as an override mechanism to engage the correct and objective thinking and behavior. The approach includes recognizing the limitations of human cognition and opting for a compromise, as most complex problems in life require.
Key questions:
- can system 1 be trained so that its outputs converge towards correctness?
- connect "the mind illuminated" to Kahneman's framework
Expansion:
- define what Culadasa means by "raw conscious power"
- solidify stance on system 2 and system 1 interaction, and the override of cognitive biases.
- habituation of system 2 responses, offloading of cognitive load and decisions
- "prompt engineering" for usage of AI to extract more value out of analysis.
# Chapter 2
- System 2 is easily pushed to its limit, like with the add-1 or add-3 task. These tasks demonstrate the surprisingly limited capacity of system 2.
- Pupils dilate in proportion to the mental effort expended.
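For reference, the Add-1/Add-3 task asks you to transform every digit of a short number under time pressure. A minimal sketch of the transformation itself, assuming digits wrap around modulo 10 (which is how the task is commonly stated; the point of the exercise is doing this mentally, in rhythm):

```python
def add_n(digits: str, n: int = 1) -> str:
    """Add n to each digit, wrapping modulo 10 (Add-1: '5294' -> '6305')."""
    return "".join(str((int(d) + n) % 10) for d in digits)
```

The transformation is trivial on paper; it is only when each digit must be held and incremented in working memory that system 2's limited capacity shows up.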
> An image came to mind: mental life—today I would speak of the life of System 2—is normally conducted at the pace of a comfortable walk, sometimes interrupted by episodes of jogging and on rare occasions by a frantic sprint. The Add-1 and Add-3 exercises are sprints, and casual chatting is a stroll.
- Mental overload can cause "blindness" as attention overloads the brain's capacity.
- Switching between tasks quickly is particularly difficult.
> What makes some cognitive operations more demanding and effortful than others? What outcomes must we purchase in the currency of attention? What can System 2 do that System 1 cannot? We now have tentative answers to these questions.
A thought that occurred to me is that system 1 and system 2 are at play when I listen to audiobooks while reading to enhance my focus and understanding. System 1 cannot be turned off and keeps system 2's attention on the material.
# Chapter 3
- Ego depletion is an interesting concept that has implications for how we approach self control and cognitive load.
- Also, glucose levels are correlated with discipline? Obviously this is not definitive, but it's worth considering whether glucose can restore ego depletion.
- The central mechanism behind faulty reasoning or cognitive biases is system 1. System 2 thinking uses logical deduction to arrive at a conclusion, while system 1 starts with a conclusion and conjures supporting reasoning.
- Example: in trading, we can imagine a scenario where system 1 creates a trade idea based on "intuition," and system 2 rushes to create biased arguments for why this trade is good. This is in opposition to analyzing the charts first, and arriving at a conclusion of long/short/do nothing. This is probably confirmation bias.
- Imagine a scenario where price is moving up, and we feel the need to jump in and chase the price. This is system 1's impulsive proposal, and system 2 may opt to agree and find evidence to support this action.
- Kahneman mentions flow state, in which cognitive load feels effortless. Flow state is characterized by effortless attention, intrinsic motivation, and loss of distraction.
> People who experience flow describe it as “a state of effortless concentration so deep that they lose their sense of time, of themselves, of their problems,”... In a state of flow, however, maintaining focused attention on these absorbing activities requires no exertion of self-control, thereby freeing resources to be directed to the task at hand.
- The law of least effort: humans gravitate towards the path that requires the least amount of effort, typically opting for system 1 over system 2.
- Flow seems to be proposed as a way to lessen the effects of ego depletion.
- Self-control requires attention and effort.
- Ego depletion explains why, in my personal experience, maintaining a state of meta-cognitive awareness during a meditation sit causes a period of weakened self-control afterwards.
- Stanovich's idea is that intelligence is not a predictor of susceptibility to cognitive bias, but rather a separate trait he calls "rationality."
- Kahneman presents the classic marshmallow test of delayed gratification, and refers to the supervisory function of system 2.
- Stanovich's idea of rationality supports my proposal of a kind of "system 3" that I mentioned in my Chapter 1 notes, which can be thought of as a subset of system 2. This implies that intelligence is not a determinant of rationality.
# Aside: Applications
Depression:
How does depression affect system 1 and system 2? As someone who has not experienced severe depression, I would imagine that a depressed person's system 1 response to anything would be one of apathy, disinterest, or boredom. It likely takes cognitive willpower and discipline to muster up the energy to "override" this inherent lack of motivation. A depressed person who previously enjoyed social outings might feel overwhelming apathy at a group trip, requiring system 2 override to will themselves to go through the motions. Over time, system 2 probably becomes worn out as ego depletion and fatigue set in.
Can self-confidence be explained by S1 and S2? Self-confidence is the belief that you can achieve your goals, as long as they are within reason. Self-confidence seems more like a subconscious belief rather than a willful one - it stands to reason that a person who is not self-confident can try to convince themselves to be more confident, but their efforts will ultimately fall short. This implies that there are qualities or characteristics that stem from S1 that are at least temporarily outside of our control, but there may be ways to influence S1.
One large factor that influences S1 is likely upbringing and the amount of positive feedback that you hear from your community (parents, coaches, teachers). This teaches the subconscious how much it should value itself. It's possible that using AI can overcome a chronic lack of supportive people.
It appears that S2 can quite often become corrupted by S1, which is only truly obvious during periods of intense introspection like meditation. Without acute introspective awareness, these things go largely unnoticed and all observations are biased, as S2 might find itself buying into some line of reasoning that serves S1's impulsive objectives.
# Chapter 4