ReTool: Reinforcement Learning for Strategic Tool Use in LLMs
Paper: an RL framework for teaching LLMs strategic tool use by dynamically interleaving real-time code execution with natural language reasoning. Uses outcome-driven reinforcement learning (no supervised tool-use examples needed), preceded by a cold-start phase on synthetic code-augmented reasoning traces.
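The interleaved rollout can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the `<code>`/`<interpreter>` tags, the `model_step` callback, and the `\boxed{}` answer extraction are all assumptions; a real system would use a proper sandbox rather than `exec`, and the outcome reward would feed a policy-gradient update.

```python
import re
import contextlib
import io

# Hypothetical tags for tool calls and interpreter feedback (assumed, not
# necessarily the paper's exact format).
CODE_RE = re.compile(r"<code>(.*?)</code>", re.DOTALL)

def run_sandboxed(code: str) -> str:
    """Toy stand-in for a real sandbox: exec the code and capture stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def rollout(model_step, prompt: str, max_turns: int = 8) -> str:
    """Interleave generation with code execution: generate until a code
    block appears, execute it, append the result, and resume generation
    until the model emits a chunk with no code block."""
    transcript = prompt
    for _ in range(max_turns):
        chunk = model_step(transcript)      # generate up to </code> or EOS
        transcript += chunk
        m = CODE_RE.search(chunk)
        if m is None:                       # no tool call -> final answer
            break
        result = run_sandboxed(m.group(1))  # real-time execution
        transcript += f"\n<interpreter>{result}</interpreter>\n"
    return transcript

def outcome_reward(transcript: str, gold: str) -> float:
    """Outcome-driven reward: 1 if the final boxed answer matches, else 0.
    No per-step supervision of the tool calls themselves."""
    m = re.search(r"\\boxed\{(.*?)\}", transcript)
    return 1.0 if m and m.group(1).strip() == gold else 0.0
```

The key property this sketch shows is that only `outcome_reward` supervises training; where and how often the model invokes code is left entirely to the policy.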
ReTool-32B achieves 67% on AIME 2024 after only 400 training steps, reaching 72.5% in an extended setting — exceeding OpenAI o1-preview by 27.9%. Dramatically outperforms text-only RL (40% after 1080 steps), showing that code execution as a tool enables both faster convergence and a higher accuracy ceiling. Exhibits emergent self-correcting code generation. Built on the VeRL training framework. By Feng, Huang, Qu, Zhang, Qin, Zhong, Jiang, Chi, Zhong (ByteDance Seed).