In Depth
Tree of Thought (ToT) extends chain-of-thought reasoning from a single linear path to a tree of multiple reasoning paths explored and compared. At each step, the model generates several possible continuations, evaluates how promising each one is, and selectively expands the best branches while pruning dead ends. This mimics deliberate human problem-solving, in which we consider multiple approaches before committing to one.
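In practice, the generate and evaluate roles are typically driven by two separate prompts sent to the same model. A minimal sketch of what those prompts might look like (the exact wording, the placeholder names, and the example puzzle are illustrative assumptions, not a fixed ToT API):

```python
# Illustrative prompt templates for the two ToT roles.
# The wording here is an assumption; real systems tune these prompts heavily.

PROPOSE_PROMPT = (
    "Problem: {problem}\n"
    "Reasoning so far: {partial_path}\n"
    "Propose {k} distinct next steps, one per line."
)

EVALUATE_PROMPT = (
    "Problem: {problem}\n"
    "Candidate reasoning path: {path}\n"
    "Rate how promising this path is: sure / maybe / impossible."
)

# Example instantiation for a Game of 24-style puzzle (hypothetical values).
prompt = PROPOSE_PROMPT.format(
    problem="Use the numbers 4, 9, 10, 13 to reach 24",
    partial_path="13 - 9 = 4",
    k=3,
)
print(prompt)
```

The evaluator's coarse verdicts (sure / maybe / impossible) are then mapped to scores that the search procedure uses to rank and prune branches.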
ToT is particularly effective for problems that require exploration, backtracking, or weighing multiple strategies, such as puzzle solving, creative writing, strategic planning, and mathematical proof construction. The model acts as both the reasoner (generating candidate steps) and the evaluator (judging which paths are promising), while a search algorithm such as breadth-first or depth-first search navigates the reasoning tree.
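The generate-evaluate-prune loop can be sketched as a small beam search. In a real ToT system, `propose` and `score` would be model calls; here they are deterministic stand-ins (a toy puzzle of reaching a target number by applying `+3` or `*2`) so the skeleton is runnable. All names and the scoring heuristic are illustrative assumptions:

```python
from heapq import nlargest

TARGET = 20  # toy goal: reach 20 from the start value via +3 / *2 steps

def propose(state):
    """Branch: generate candidate next thoughts from a partial path.
    (Stand-in for an LLM 'propose next steps' call.)"""
    value, path = state
    return [(value + 3, path + ["+3"]),
            (value * 2, path + ["*2"])]

def score(state):
    """Evaluate: heuristic promise of a partial path.
    (Stand-in for an LLM 'rate this path' call.)"""
    value, _ = state
    return -abs(TARGET - value)  # closer to the target = more promising

def tree_of_thought(start, beam_width=2, max_depth=6):
    frontier = [(start, [])]  # root of the reasoning tree
    for _ in range(max_depth):
        # Expand every frontier node into its candidate continuations.
        candidates = [c for s in frontier for c in propose(s)]
        for value, path in candidates:
            if value == TARGET:
                return path  # solution found
        # Prune: keep only the most promising branches (breadth-first beam).
        frontier = nlargest(beam_width, candidates, key=score)
    return None  # search budget exhausted

plan = tree_of_thought(start=1)
print(plan)  # a sequence of "+3"/"*2" steps reaching 20
```

Swapping the breadth-first loop for a recursive expansion of one branch at a time (with backtracking when `score` drops too low) yields the depth-first variant; the propose/evaluate/prune structure is unchanged.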
While ToT often produces better solutions for complex problems, it requires substantially more computation than plain chain-of-thought, since many branches must be generated and evaluated. This makes it most valuable for high-stakes decisions where solution quality justifies the additional inference cost. The concept has influenced the design of reasoning models and agentic systems that plan and evaluate multiple action sequences before execution.