In Depth
Introduced in a 2022 Google paper (Wei et al.), chain-of-thought (CoT) prompting demonstrated that large models can solve problems they fail on when prompted directly, simply by being asked to walk through intermediate steps before answering. Variants include zero-shot CoT (appending "Let's think step by step" to the prompt), self-consistency (sampling multiple reasoning chains and majority-voting over their final answers), and tree-of-thoughts (exploring and evaluating branching reasoning paths). CoT is now standard practice for reasoning-heavy LLM applications.
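The self-consistency variant can be sketched in a few lines: sample several reasoning chains at nonzero temperature, extract each chain's final answer, and return the most common one. In the sketch below, `sample_chain` is a hypothetical stand-in for a real LLM call (an assumption, not part of any particular API); in practice it would send the question with a CoT prompt and parse the final answer from the generated chain.

```python
import random
from collections import Counter

def sample_chain(question: str, rng: random.Random) -> str:
    # Hypothetical stand-in for an LLM sampled at temperature > 0.
    # A real implementation would prompt the model to reason step by
    # step and parse only the final answer out of the chain.
    return rng.choice(["7"] * 9 + ["12"])  # occasionally-wrong answers

def self_consistency(question: str, n_samples: int = 10, seed: int = 0) -> str:
    """Majority-vote over final answers from independently sampled chains."""
    rng = random.Random(seed)
    answers = [sample_chain(question, rng) for _ in range(n_samples)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer
```

The key design point is that voting happens over final answers, not over the chains themselves: two chains may reason differently yet converge on the same answer, and that agreement is the signal self-consistency exploits.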