# [[👤Leopold Aschenbrenner]]
- [[Artificial Intelligence]]
***
Leopold Aschenbrenner is a prominent young figure in artificial intelligence research and long-term thinking. Key points about him:
1. Background and Education:
- Graduated as valedictorian from Columbia University at age 19 in 2021
- Majored in Mathematics-Statistics and Economics
- Originally from Germany, now based in San Francisco
2. Career Highlights:
- Worked on the Superalignment team at OpenAI
- Research affiliate at the Global Priorities Institute (Oxford University)
- Founded an investment firm focused on AGI, backed by notable tech entrepreneurs
- Previously a research fellow at the Forethought Foundation and Future of Humanity Institute
3. Research and Interests:
- Focuses on existential risks, economic growth, and long-term future of humanity
- Published work on the relationship between existential risk and economic growth
- Advocates for more attention and resources devoted to AI alignment and safety
4. Recent Work:
- Published a lengthy essay series called "Situational Awareness: The Decade Ahead"
- Predicts rapid AI advancement, with potential for AGI by 2027 and superintelligence by 2030
- Warns about lack of preparedness for advanced AI and potential risks to global security
5. Controversies:
- Fired from OpenAI in April 2024, reportedly for leaking information (a characterization he disputes)
- His aggressive predictions and timelines for AGI development are disputed by many researchers
6. Public Presence:
- Active on social media, particularly Twitter
- Gives interviews and appears on podcasts discussing AI, existential risks, and long-term thinking
Aschenbrenner is known for his precocious intellect, interdisciplinary approach, and provocative ideas about the future of AI and humanity. His work has garnered attention in both academic and tech industry circles.