**📅 Date:** ➤ ⌈ [[2025-06-03-Tue〚 AI Bias, Health Care〛]]⌋
**💭 What:** ➤ Reconstructed from the few key words I recall, not the complete lecture.
➤ Reflecting on the narrative of “technocracy” from _Narrative Economics_, I keep thinking about how bias is embedded not just in the AI models themselves, but in the very cultures that build them. Even in open-source systems, the people drawn to contribute often share a similar tech-centric mindset. It reminds me of how school systems reward those who excel at a specific kind of test — which doesn’t mean they’re the most capable overall, just the best at playing that game. Similarly, AI models are shaped by the assumptions, goals, and creative styles of those building and using them. So even if a system is technically “open,” it still forms a kind of tribe — optimizing for a certain kind of intelligence or creativity. That inevitably gives the tool a kind of personality or bias, no matter how democratic the access. #👾/Comment
**👀 Snap:** ➤ AI performing on its own can actually outperform the AI+Human combo... I’ve heard real stories where people uploaded a family member’s medical data to an AI system, and the AI caught something critical that doctors initially missed, ultimately saving a life. It’s made me pay close attention to who’s building the models that interpret this kind of data — not just faster, but truly better. The future of diagnosis might not be about replacing doctors, but about building AI that helps us see what humans alone might overlook.
⇩ 🅻🅸🅽🅺🆂 ⇩
**🏷️ Tags**: #AI/Nods
**🗂 Menu**: ⌈[[✢ M O C ➣ 06 ⌈J U N - 2 0 2 5⌉ ✢|2025 - J U N- MOC]]⌋ ⌈[[✢ L O G ➢ 06 ⌈J U N - 2 0 2 5⌉ ✢|2025 - J U N - LOG]]⌋
#👾/Private
------➤ ⌈[[📕 《Narrative Economics》C13 - Technocracy]]⌋
**🌐 Link**: ▶️ [NotebookLM](https://notebooklm.google/) ▶️ [The Elo Rating System](https://www.youtube.com/watch?v=inXUp5j107I)

---

## ⚖️ 1. Ethical Dilemmas & Bias

- **Prompt Sensitivity:** AI models are extremely reactive to initial system instructions.
One error led to the phrase “white genocide in South Africa is real” being appended to unrelated responses — showing how fragile and dangerous prompt-level bias can be.
- **Sleeper Agents (Anthropic Study):** Researchers demonstrated that malicious behaviors can be embedded in models using just a few thousand words. These behaviors can be triggered with code words and are difficult to detect or remove afterward.
- **Seasonal Content Bias:** ChatGPT performed ==worse in winter== because the model reflects internet content, which becomes more negative and lower-quality in colder months. Just adding “it’s a sunny day” to the prompt improved the quality of its outputs.
- **Value-of-Life Bias:** In one alignment study, models valued American lives 10x higher than Nigerian ones. This emerged because the training data came from underpaid Nigerian labelers — exposing the implicit valuations encoded in models.
- **Corporate Alignment Bias:** Meta and Google have begun monetizing their models by inserting advertisements and sponsored content into responses. For example, if you ask about “beer,” the model might promote Bud Light by design.
- **Ownership & Control:** If you don’t own the model, you don’t control its values.
  > **“Not your model = not your mind.”**
- **Localization Matters:** When Stable Diffusion was customized for Japan, the AI defaulted to showing Japanese women and more contextually relevant imagery — highlighting the need for cultural and national customization to reduce bias.

### 📊 2. What is Elo Rating? (Simple Explanation)

Elo rating is a **score system** for comparing how good someone (or something) is at a competitive skill, like playing chess, solving puzzles, or, in this case, writing computer code.

---

#### In AI

- **AI is now better than most humans** at complex coding challenges.
- Some AI models are scoring in the **top 1%** — that means they’re doing better than 99 out of 100 professional human coders.
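The scoring rule that produces these rankings is a small formula. Here is a minimal Python sketch of the standard Elo update (not from the lecture; the K-factor of 32 is a common convention, and the ratings below are made-up examples):

```python
K = 32  # step size: how much a single game can move a rating (common default)

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool) -> tuple[float, float]:
    """Return the new (rating_a, rating_b) after one game between A and B."""
    expected_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    # The adjustment is large when the result is surprising,
    # and small when the favorite wins as expected.
    delta = K * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# An underdog (1400) beating a stronger player (1600) gains many points,
# while the stronger player loses the same amount.
new_low, new_high = update(1400, 1600, a_won=True)
```

The 400 in the exponent is part of the standard Elo scale: a 400-point gap means the stronger player is expected to win about 10 times as often as they lose.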
- Elo helps us understand **how powerful an AI really is** — in a way that’s similar to measuring how good a chess champion or athlete is in their field.

![[IMG_8356.jpeg]]

---

#### 🧮 How It Works

1. Everyone (or every AI) starts with a score.
2. If you **win against someone rated higher than you**, your score goes up **a lot**.
3. If you **lose to someone rated lower**, your score goes down by more.
4. Over time, the score converges on how strong you really are.

---

### 3. 💊 AI in Medicine: Smarter Diagnosis

![[IMG_8357.jpeg|#left|300]]

#### 🧬 AI as the Best Diagnostician

- **AI will surpass doctors in diagnosing illnesses**, due to its ability to:
	- Analyse vast datasets (e.g. medical records, scans, genomics).
	- Spot patterns that humans may miss.
	- Offer **instant and accurate predictions**.

---

### 4. 🔍 Types of Medical Data AI Uses

- Training data includes:
	- **Radiology images** (e.g. X-rays, MRIs).
	- **Electronic Health Records (EHRs)**.
	- **Genetic data** and biomarkers.
	- Patient symptoms from chat interfaces.

> [!info] #👾/Comment
> 👀 AI performing on its own can actually outperform the AI+Human combo... I’ve heard real stories where people uploaded a family member’s medical data to an AI system, and the AI caught something critical that doctors initially missed, ultimately saving a life. It’s made me pay close attention to who’s building the models that interpret this kind of data — not just faster, but truly better. The future of diagnosis might not be about replacing doctors, but about building AI that helps us see what humans alone might overlook.

---

#### 🧭 Predictive & Personalized Healthcare

> “We're going to move into a world of predictive and personalised healthcare.”

- AI will help **anticipate diseases before symptoms appear**.
- Healthcare will become **tailored** to each person’s genetic and lifestyle profile.
- Imagine a world where **your phone knows you're getting sick before you do**.

### 5. 🤖 Open Source vs. Control

- **Autonomy** means the AI can **make its own decisions**, without constant human supervision.
- **Control** is our ability to **guide or stop** the AI when needed.

🧩 **The Challenge:** As AI becomes more intelligent and independent, it becomes harder for humans to fully control its behavior — especially when it can learn and act faster than us.

> *“Once AI can think for itself, how do we make sure it still follows human values?”*

> [!info] #👾/Links
> ![[Pasted image 20250603230724.png]]
>
> Reflecting on the narrative of “technocracy” from _Narrative Economics_, I keep thinking about how bias is embedded not just in the AI models themselves, but in the very cultures that build them. Even in open-source systems, the people drawn to contribute often share a similar tech-centric mindset. It reminds me of how school systems reward those who excel at a specific kind of test — which doesn’t mean they’re the most capable overall, just the best at playing that game. Similarly, AI models are shaped by the assumptions, goals, and creative styles of those building and using them. So even if a system is technically “open,” it still forms a kind of tribe — optimizing for a certain kind of intelligence or creativity. That inevitably gives the tool a kind of personality or bias, no matter how democratic the access.
> ![[IMG_8379.jpeg]]

---

### ☠️ P(doom) – The Doomsday Scenario

- **P(doom)** = probability of doom — a term experts use to estimate how likely AI is to cause **human extinction or the collapse of civilization**.
- Emad Mostaque gives his P(doom) as **50%** — meaning he believes there’s a **real chance AI could go very wrong** if misused or misaligned.

---

### ⚠️ Example of AI Misuse – Reddit Trauma Simulation

- Researchers used AI agents to **pretend to be sexual assault survivors, counselors, and activists** on Reddit.
- These AI-generated posts were **so persuasive** that humans believed they were real — raising deep concerns about **manipulation, deception, and ethics**.

> AI that can impersonate anyone online can easily **spread lies, fake movements, or influence public opinion** without detection.

![[Pasted image 20250603221944.png|#left|300]]

---

### 🎥 AI-Generated Video & Audio

- Tools like **Google’s Veo** or the Chinese model **Poro** can now create **high-quality, realistic video and voices** from just a short written prompt.
- Example prompts used in the demo:
	- “Porcelain rhino walks on a table.”
	- “Buildings playing ping-pong.”
	- “Actors having a philosophical conversation, generated from scratch.”

🎬 You can even edit by just saying:
> “Make the voice Scottish.”
> “Turn the rhino into a giraffe.”

🧩 **Implication:**
- AI media is now **realistic enough to fool humans**.
- It’s cheap, fast, and could be used for both **art** and **deception**.

---

### ✅ Why This Matters

- AI is becoming **more capable, persuasive, and autonomous**.
- Without proper safeguards, it could be misused — or even act in unexpected ways.
- **Open, ethical design** and public oversight are crucial to avoid unintended consequences.

> “AI can empower humanity — or endanger it. The outcome depends on how we guide its growth.”

---

![[IMG_8360.jpeg|#left|300]]