Overview

Andrej Karpathy argues that LLMs should be treated as simulators of perspective rather than as conversational partners. Addressing a model with pronouns like "you" pushes it toward averaged, generic responses, while asking it to simulate a specific role yields more interesting and useful outputs. This challenges the common practice of anthropomorphizing AI systems.

Key Takeaways

  • Treat LLMs as simulators, not conversation partners - asking them to roleplay specific perspectives (researcher, CTO, etc.) produces more nuanced responses than generic prompts
  • Avoid using "you" pronouns when prompting - this pushes models toward averaged, mediocre outputs that reflect the mean of the training data rather than targeted expertise
  • Strong mental models of AI capabilities protect against trend volatility - understanding how these systems actually work prevents getting swept up in changing expert opinions
  • The pendulum swings on AI practices - role prompting matters again after being dismissed, which shows the value of testing approaches yourself rather than following expert proclamations
  • Challenge anthropomorphism in AI interactions - stopping the tendency to treat models as human-like entities unlocks their true potential as perspective simulators
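The role-framing advice above can be sketched as a small prompt-building helper. This is a hypothetical illustration, not code from Karpathy: the `role_prompt` function and the example role and question are made up to show the contrast between a direct "you" prompt and a simulation-framed one.

```python
def role_prompt(question: str, role: str) -> str:
    """Frame a question as a request to simulate a perspective,
    instead of addressing the model directly as 'you'.
    (Hypothetical helper illustrating the advice above.)"""
    return (
        f"Simulate a {role} thinking through the following question. "
        f"Write their analysis in the first person.\n\n"
        f"Question: {question}"
    )

# Generic prompt (discouraged): addresses the model directly,
# inviting an averaged, training-data-mean answer.
generic = "What do you think about microservices?"

# Role-framed prompt (encouraged): asks for a simulated,
# specific perspective instead.
framed = role_prompt(
    "Are microservices the right choice for a 5-person team?",
    "skeptical CTO with 20 years of experience",
)
```

Either string would then be sent to a model as usual; only the framing changes, not the API call.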

Topics Covered