AI Safety at UCLA Intro Fellowship: Diffusion Track

Table of Contents

  1. Week 1: Preventing an AI-related catastrophe + Scaling Hypothesis
  2. Week 2: The future is going to be wild + Image Generation Mathematical Framework
  3. Week 3: Introducing Autoencoders, KL Divergence, and Unsolved Problems in AI Safety
  4. Week 4: AI Safety Field Background + Deep dive into VAEs
  5. Week 5: AI Alignment Failure Modes + Mathematics behind VAEs
  6. Week 6: Open Problems in AI X-Risk + Diffusion Models

Week 1: Preventing an AI-related catastrophe + Scaling Hypothesis

Core Readings: (70 mins)

  1. Intelligence Explosion (20 min)
  2. Circuits, Distilled (50 min)

Learning Goals:

  1. Familiarize yourself with the arguments for AI being an existential risk
  2. Understand the rapid scaling of modern AI models and its implications for our interpretability methods.

Week 2: The future is going to be wild + Image Generation Mathematical Framework

Core Content: (2h 15 min)

Theoretical Readings (75 min):

  1. AI and Compute (5 min)
  2. The Bitter Lesson (10 min)
  3. All Possible Views About Humanity’s Future are Wild (15 min)
  4. “This can’t go on” (25 min)
  5. Intelligence Explosion: Evidence and Import (20 min)

Practical Readings (60 min):

  1. (if unfamiliar) 3Blue1Brown Neural Networks, Chapters 1 and 2 (30 min)
  2. DDPMs, Part 1 - Autoencoders (30 mins)

Learning Goals:

Theoretical

  1. Understand the relationship between growing compute and general capabilities.
  2. Gain experience with the types of datasets used in modern AI systems.
  3. See how AI could impact a wide range of industries.
  4. Reflect on the radical impact AI can have on the future of humanity.
  5. Reflect on the strange possibilities of our economic future.
  6. Reflect on how quickly AI might transition from powerful to superintelligent.

Practical

  1. Understand the basics of neural networks and deep learning.
  2. Understand the intuition behind compression in autoencoders (a minimal sketch follows this list).
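
As a rough illustration of the compression idea (the notebook has the real implementation), here is a minimal autoencoder sketch in PyTorch. The class name, layer sizes, and 784-dimensional input (a flattened 28x28 image) are illustrative assumptions, not the notebook's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAutoencoder(nn.Module):
    """A 784-dim input is squeezed through a 32-dim bottleneck, then reconstructed."""
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),               # compression: 784 -> 32 numbers
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),  # reconstruction: 32 -> 784 numbers
        )

    def forward(self, x):
        z = self.encoder(x)        # the compressed code
        return self.decoder(z)     # the reconstruction

model = TinyAutoencoder()
x = torch.rand(16, 784)            # a fake batch of flattened 28x28 images
loss = F.mse_loss(model(x), x)     # reconstruction error is the training signal
```

The whole point is the bottleneck: the network can only reconstruct well if the 32 numbers capture the important structure of the input.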

Week 3: Introducing Autoencoders, KL Divergence, and Unsolved Problems in AI Safety

Core Content: (1h 15 min)

Theoretical Readings (30 min):

  1. Why AI alignment could be hard with modern deep learning (20 mins)
  2. Intuitively Understanding KL Divergence (10 mins)

Notebook Exercises (45 mins):

  1. (Optional) Variational Autoencoder (VAE) Intuition (45 mins): this covers material a little ahead of the schedule; read on if you’d like to dive into the mathematics!

Learning Goals:

Theoretical

  1. Understand the issues with using performance alone to evaluate classifiers.

Practical

  1. Establish an intuition for Kullback-Leibler (KL) divergence and what it measures as a (non-symmetric) notion of distance between probability distributions; see the worked example below.
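
As a tiny worked example of that asymmetry, the snippet below computes the KL divergence in both directions between two made-up discrete distributions; the numbers are arbitrary and only meant to show that KL(p||q) and KL(q||p) generally differ.

```python
import numpy as np

# Two made-up discrete distributions over the same three outcomes.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

# KL(p || q) = sum_i p_i * log(p_i / q_i). It is non-negative but asymmetric,
# so it is not a true distance metric.
kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))

print(f"KL(p||q) = {kl_pq:.4f}")  # ~0.0851 nats
print(f"KL(q||p) = {kl_qp:.4f}")  # ~0.0920 nats -> the two directions differ
```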

Week 4: AI Safety Field Background + Deep dive into VAEs

Core Readings: (1h 45 min)

Theoretical Readings (1h 15 mins):

  1. A Bird’s Eye View of the ML Field (45 min)
  2. Paul Christiano: Current work in AI alignment (30 min)

Notebook Exercises (30 mins):

  1. DDPM_part_2_vae.ipynb (30 min)

Learning Goals:

  1. Understand how ML research is conducted and how it affects AI safety research.
  2. Be able to evaluate if a research agenda advances general capabilities.
  3. Learn about the variety of different research approaches tackling alignment.
  4. Understand the mechanics of Variational Autoencoder (VAE) models and practice writing your own (a minimal sketch follows this list).
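
Here is a minimal sketch of the VAE mechanics the notebook walks through: the encoder outputs a mean and log-variance rather than a single code, sampling uses the reparameterization trick, and the loss adds a KL term to the reconstruction term. The class name, layer sizes, and 784-dimensional input are placeholders and may differ from the notebook's own code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Unlike a plain autoencoder, the encoder outputs a mean and log-variance per latent."""
    def __init__(self, input_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(input_dim, 128)
        self.mu_head = nn.Linear(128, latent_dim)
        self.logvar_head = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)), closed form for Gaussians.
    recon_term = F.binary_cross_entropy(recon, x, reduction="sum")
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

model = TinyVAE()
x = torch.rand(8, 784)                   # fake batch of flattened images in [0, 1]
recon, mu, logvar = model(x)
loss = vae_loss(x, recon, mu, logvar)
```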

Week 5: AI Alignment Failure Modes + Mathematics behind VAEs

Core Readings: (1h 30 min)

Theoretical Readings (45 mins):

  1. X-Risk Analysis for AI Research (Appendix A, pp. 13-14) (10 min)
  2. What Failure Looks Like (10 min)
  3. Clarifying What Failure Looks Like (25 mins)

Notebook Exercises (45 mins):

  1. Diffusion, distilled (45 mins); a sketch of the forward (noising) process follows below.
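
Before the notebook, it may help to see the forward (noising) process in code. This is a generic sketch with a linear beta schedule; the schedule values, the number of steps T, and the function name are assumptions and may not match the notebook.

```python
import torch

# A linear noise schedule (illustrative values; the notebook may use a different schedule).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product of alphas up to each step

def q_sample(x0: torch.Tensor, t: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)  # broadcast over (batch, channels, height, width)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise

# Usage: noise a batch of images to random timesteps.
x0 = torch.rand(4, 1, 28, 28)
t = torch.randint(0, T, (4,))
x_t = q_sample(x0, t, torch.randn_like(x0))
```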

Learning Goals:

  1. Be able to determine how an AI safety project may reduce X-risk.
  2. Evaluate the failure modes of misaligned AI.
  3. Understand the factors that lead to value lock-in.

Week 6: Open Problems in AI X-Risk + Diffusion Models

Core Readings: (2h)

Theoretical Readings: (1h 15 mins)

  1. Open Problems in AI X-Risk (60 min)
  2. AI Governance: Opportunity and Theory of Impact (15 min)

Notebook Exercises (45 mins):

  1. Full DDPM notebook - complete all exercises (45 mins)

Learning Goals:

  1. Pick a research agenda you find particularly interesting (perhaps to pursue later).
  2. Understand the role AI governance plays in the broader field of AI safety.

Final Reflection

If you were to pursue a research question or topic in AI safety, what would it be? Which area of AI safety do you find most interesting, and which do you find most promising?

Note: The DDPM notebook is incomplete (missing its training loop). Can you write it?
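
If you get stuck, here is a sketch of the standard DDPM training loop (add noise, predict it, minimize MSE). It is a template under assumptions, not the notebook's intended solution: it presumes the notebook provides a noise-prediction network called with `model(x_t, t)`, a `dataloader` yielding (image, label) batches, and the `T` and `alpha_bars` schedule from the Week 5 snippet; all of those names are placeholders.

```python
import torch
import torch.nn.functional as F

def train_ddpm(model, dataloader, alpha_bars, T, num_epochs=20, lr=2e-4):
    """Sketch of the standard DDPM training loop; argument names are placeholders."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(num_epochs):
        for x0, _ in dataloader:                        # x0: a batch of clean images
            t = torch.randint(0, T, (x0.shape[0],))     # a random timestep for each image
            noise = torch.randn_like(x0)                # the epsilon the network must predict
            a_bar = alpha_bars[t].view(-1, 1, 1, 1)
            x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward (noising) process

            pred_noise = model(x_t, t)                  # the network predicts the added noise
            loss = F.mse_loss(pred_noise, noise)        # the simple DDPM objective

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Try writing your own version in the notebook first, then compare against this structure.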