AI Lab Tracker
LongCat ZigZag Attention
Type: paper | Date: 2025-12-30 | Lab: Meituan
Introduces LoZA (LongCat ZigZag Attention), a sparse attention scheme for efficient long-context scaling in LongCat models (a minimal illustrative sketch of sparse attention follows this entry).
Links: Paper (arXiv) | HuggingFace | GitHub
arXiv: 2512.23966
Tags: attention, efficiency, scaling
Related: longcat-flash-thinking
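The entry above only names the technique, so here is a minimal sketch of what a sparse attention scheme looks like in general: each block of queries attends to a restricted window of key blocks rather than the full sequence, cutting the cost of long contexts. This is not the zigzag pattern from the paper; the function name, parameters, and the sliding-window layout are all illustrative assumptions, and the actual LoZA design is specified in arXiv:2512.23966.

```python
import numpy as np

def block_sparse_attention(q, k, v, block_size=4, local_blocks=1):
    """Causal attention where each query block sees only nearby key blocks.

    Generic sliding-window block sparsity for illustration only; the
    actual zigzag pattern of LoZA is defined in arXiv:2512.23966.
    """
    n, d = q.shape
    assert n % block_size == 0, "sequence length must be a multiple of block_size"
    out = np.zeros_like(v)
    for qb in range(n // block_size):
        q_lo, q_hi = qb * block_size, (qb + 1) * block_size
        # Visible keys: this block plus `local_blocks` preceding blocks.
        k_lo = max(0, qb - local_blocks) * block_size
        scores = q[q_lo:q_hi] @ k[k_lo:q_hi].T / np.sqrt(d)
        # Causal mask inside the visible window.
        qi = np.arange(q_lo, q_hi)[:, None]
        ki = np.arange(k_lo, q_hi)[None, :]
        scores = np.where(ki <= qi, scores, -np.inf)
        # Numerically stable softmax over the visible keys only.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[q_lo:q_hi] = weights @ v[k_lo:q_hi]
    return out

# 16 tokens, head dim 8: each 4-query block attends to at most 8 keys
# instead of all 16, which is where the long-context savings come from.
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
print(block_sparse_attention(q, k, v).shape)  # -> (16, 8)
```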