AdaGC: Improving Training Stability for Large Language Model Pretraining
Proposes AdaGC, an adaptive per-tensor gradient clipping method that bounds each tensor's gradient norm relative to an exponential moving average of its own historical clipped values. Unlike global gradient clipping (GlobalGC), AdaGC is optimizer-agnostic (validated with Muon and Lion), adds negligible memory overhead, and reduces communication cost under hybrid-parallel distributed training.
Addresses loss spikes arising from data outliers, hardware/transient faults, numerical precision, and hyperparameter settings — recurring failure modes during frontier LLM pretraining. Evaluated on Llama-2 7B, Mixtral 8×1B, and ERNIE 10B-A1.4B: eliminates training spikes entirely across all three models while improving downstream accuracy by 1.32%, 1.27%, and 2.48% respectively over GlobalGC. A practical, drop-in replacement for the global-norm clipping used in most LLM training stacks.
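A minimal sketch of the per-tensor clipping idea described above: each parameter tensor keeps an EMA of its own clipped gradient norms and is clipped whenever its current norm exceeds a multiple of that EMA. The function name, the threshold factor `lam`, the decay `beta`, and the EMA initialization are illustrative assumptions, not the paper's exact hyperparameters or implementation.

```python
import torch

def adagc_clip_(params, ema_norms, lam=1.1, beta=0.98, eps=1e-8):
    """Per-tensor adaptive gradient clipping (sketch, not the paper's exact code).

    Each tensor's gradient norm is bounded by `lam` times an exponential moving
    average of that tensor's previously *clipped* norms, so a single outlier
    gradient cannot inflate the threshold for later steps.
    """
    for i, p in enumerate(params):
        if p.grad is None:
            continue
        g = p.grad
        norm = g.norm()
        if ema_norms[i] is None:
            # First step: initialize the EMA with the current norm (assumption).
            ema_norms[i] = norm.detach().clone()
        threshold = lam * ema_norms[i]
        if norm > threshold:
            g.mul_(threshold / (norm + eps))  # scale gradient down in place
            clipped_norm = threshold
        else:
            clipped_norm = norm
        # Update the EMA with the clipped norm, per the method's core idea.
        ema_norms[i] = beta * ema_norms[i] + (1 - beta) * clipped_norm

# Hypothetical usage inside a training step, between backward() and optimizer.step():
#   ema = [None] * len(list(model.parameters()))
#   loss.backward()
#   adagc_clip_(list(model.parameters()), ema)
#   optimizer.step()
```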
Paper
arXiv: 2502.11034