Every Activation Boosted: Scaling General Reasoner to 1 Trillion Open Language Foundation Paper • 2510.22115 • Published 17 days ago • 81
Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning Paper • 2510.19338 • Published 20 days ago • 110
Article Art of Focus: Page-Aware Sparse Attention and Ling 2.0’s Quest for Efficient Context Length Scaling By RichardBian and 19 others • 22 days ago • 14
Article Ring-flash-linear-2.0: A Highly Efficient Hybrid Architecture for Test-Time Scaling By RichardBian and 8 others • Oct 9 • 11
KaLM-Embedding/KaLM-embedding-multilingual-mini-instruct-v2.5 Feature Extraction • 0.5B • Updated 7 days ago • 7.88k • 39