Andrej Karpathy — “We’re summoning ghosts, not building animals”
Technology

2:26:08
October 17, 2025
Dwarkesh Patel

What You'll Learn

  • Understand the current limitations of Large Language Models (LLMs) and the challenges in achieving Artificial General Intelligence (AGI).
  • Explore the potential of AI in education, including personalized learning and the creation of AI tutors.
  • Analyze the impact of AI on the job market and the importance of focusing on human augmentation rather than complete automation.
Video Breakdown
Andrej Karpathy discusses the future of AI, contrasting current approaches with the development of animal intelligence and highlighting the limitations of current LLMs. He explores the potential of AI in education, coding, and knowledge work, emphasizing the importance of data quality, personalized learning, and human augmentation rather than replacement. The discussion also delves into the ethical considerations of AI deployment, the challenges of achieving AGI, and the potential impact on economic growth and society.
Key Topics
AI Agents · LLM Limitations · AI Coding Models · Reinforcement Learning · Synthetic Data · Data Quality
Video Index
AI Development and the Quest for AGI
This module explores the history of AI, the shift towards deep learning, and the challenges in creating AI agents that replicate human-like intelligence. It covers the limitations of current LLMs, the problems with reinforcement learning, and the need for better supervision methods.
The Decade of AI Agents and the Evolution of Intelligence
0:48 - 12:51
Discusses the timeline for AI agent development, contrasting it with shorter-term predictions and reflecting on the history of AI and the shift towards deep learning.
AI Agents · Decade of Agents · Deep Learning History · Reinforcement Learning · Pre-Training vs. Evolution
Pre-training, In-Context Learning, and the Limits of Current LLMs
12:51 - 24:55
Explores the differences between pre-training and in-context learning, drawing analogies to human cognition and discussing the challenges in replicating human-like intelligence.
Pre-Training vs. In-Context Learning · Knowledge vs. Algorithms · Human Cognition Analogies · Continual Learning · Sparse Attention
AI Coding Models and the Future of Machine Learning
24:55 - 36:57
Discusses the future of AI and machine learning, focusing on the development of the nanochat repository and the capabilities and limitations of AI coding models.
Nanochat Repository · AI Coding Models · Gradient Descent · Translation Invariance · Code Generation
Reinforcement Learning Limitations and the Autonomy Slider
36:57 - 48:59
Discusses the current state and limitations of AI models, particularly LLMs, focusing on the problems with reinforcement learning and the challenges of process-based supervision.
Reinforcement Learning Limitations · Process-Based Supervision Challenges · LLM Judges and Adversarial Examples · AI as a Continuum · Autonomy Slider
Improving LLMs: Data, Generalization, and Efficiency
This module focuses on methods to improve LLMs, including using synthetic data and reflection, while addressing the problem of model collapse and the trade-off between memorization and generalization. It also touches on the potential for smaller, more efficient cognitive cores.
Synthetic Data, Model Collapse, and the Memorization vs. Generalization Trade-off
48:59 - 1:01:07
Discusses methods to improve LLMs, including using synthetic data and reflection, while addressing the problem of model collapse and the trade-off between memorization and generalization.
LLM Judges · Synthetic Data Generation · Model Collapse · Entropy · Memorization vs. Generalization
AI Model Size, Data Quality, and Measuring AGI Progress
1:01:07 - 1:13:06
Discusses the future of AI model sizes, the quality of training data, and how to measure progress towards AGI, focusing on the automation of knowledge work and the potential for AI to augment rather than replace human jobs.
AI Model Size · Training Data Quality · AGI Progress Metrics · Automation of Knowledge Work · AI Augmentation of Jobs
AI Deployment, Automation, and the Future of Work
This module explores the impact and deployment of AI, particularly in coding versus other domains, and discusses the potential for loss of control and understanding as AI progresses. It also delves into the potential impact of AI on economic growth and the evolutionary history of intelligence.
AI Deployment in Coding and the Automation of Knowledge Work
1:13:07 - 1:25:11
Discusses the impact and deployment of AI, particularly in coding versus other domains like language-based tasks and slide creation, and explores the potential for loss of control and understanding as AI progresses.
AI Deployment in Coding · Automation of Knowledge Work · Loss of Control and Understanding · Superintelligence · Continuous Exponential Growth
AI and Economic Growth: Hyper-Exponential Increase or Continued 2% Growth?
1:25:11 - 1:37:13
Explores differing perspectives on the potential impact of AI on economic growth, debating whether it will lead to a hyper-exponential increase or simply maintain the existing 2% growth trajectory.
AI and Economic Growth · Intelligence Explosion · Evolution of Intelligence · Industrial Revolution Analogy · Labor Constraints
The Challenges of AI Deployment and the Vision for AI-Powered Education
This module discusses the challenges of AI deployment, drawing parallels to self-driving cars, and transitions into a discussion about education and the vision for Eureka as a personalized AI education platform. It emphasizes the importance of human empowerment through education.
Self-Driving Cars, AI Deployment, and the March of Nines
1:37:13 - 1:49:14
Discusses the evolution of intelligence, the potential for LLMs to develop culture and self-play, and draws parallels between the challenges of self-driving car development and the deployment of AI in other domains.
Scalable Intelligence · LLM Culture and Self-Play · Demo-to-Product Gap · March of Nines · Self-Driving vs. AI Deployment
Eureka: A Starfleet Academy for Technical Knowledge
1:49:14 - 2:01:18
Discusses the analogy between self-driving cars and AI deployment, highlighting the challenges and differences in scalability, economics, and societal impact, and transitions into a discussion about education and the vision for Eureka.
Self-Driving Cars vs. AI · AI Deployment Challenges · Eureka - Starfleet Academy · Personalized AI Education · Human Empowerment through Education
Ramps to Knowledge and the Future of Education with AI
2:01:18 - 2:13:19
Discusses the current limitations of AI in education and outlines the speaker's vision for building ramps to knowledge through a combination of technical and human elements.
AI Tutor Limitations · Ramps to Knowledge · LLM101n Course · Post-AGI Education · Human Flourishing
Effective Teaching and Learning Methodologies
This module discusses the future of learning with AI, the importance of physics in developing problem-solving skills, and effective teaching strategies, including simplifying complex topics and understanding the student's perspective. It emphasizes the importance of re-explaining concepts to solidify understanding.
The Future of Learning with AI and the Importance of Physics
2:13:19 - 2:25:21
Discusses the future of learning with AI, the importance of physics in developing problem-solving skills, and effective teaching strategies.
AI Tutors · Physics Education · Effective Teaching · Curse of Knowledge · Learning Methodologies
Re-explaining Concepts for Solidified Understanding
2:25:21 - 2:25:42
The speaker emphasizes the importance of re-explaining concepts to solidify understanding and identify gaps in knowledge.
Re-Explaining for Understanding · Knowledge Manipulation · Knowledge Gaps · Reconciling Knowledge
Questions This Video Answers
What are the main limitations of current AI models, particularly LLMs?
Current LLMs struggle with continual learning, knowledge distillation, process-based supervision, and generalizing beyond their training data. They often lack a deep understanding of the world and can be prone to model collapse.

How can AI be used to improve education?
AI can personalize learning experiences, provide tutor-like support, and create ramps to knowledge by simplifying complex topics and adapting to individual student needs. Platforms like Eureka aim to provide personalized AI education.

What is the potential impact of AI on the job market?
AI has the potential to automate knowledge work and augment human capabilities. While some jobs may be automated, the focus should be on using AI to enhance human productivity and create new opportunities for collaboration.

What are some key challenges in deploying AI systems, as illustrated by the self-driving car analogy?
Deploying AI systems faces challenges related to scalability, economics, safety, and societal impact. The 'march of nines' refers to the difficulty of achieving near-perfect reliability, and there's often a significant gap between successful demos and robust, real-world products.