arxiv:2510.22115

Every Activation Boosted: Scaling General Reasoner to 1 Trillion Open Language Foundation

Published on Oct 25 · Submitted by Zhang Zhiqiang on Nov 4
#1 Paper of the day
Authors: Ang Li, et al.
Abstract

Ling 2.0, a reasoning-oriented language model series, achieves high efficiency and accuracy through a Mixture-of-Experts paradigm, sparse activation, and innovative training techniques.

AI-generated summary

We introduce Ling 2.0, a series of reasoning-oriented language foundation models built upon the principle that every activation boosts reasoning capability. Designed to scale from tens of billions to one trillion parameters under a unified Mixture-of-Experts (MoE) paradigm, Ling 2.0 emphasizes high sparsity, cross-scale consistency, and efficiency guided by empirical scaling laws. The series includes three non-thinking (instruct) models - Ling-mini-2.0, Ling-flash-2.0, and Ling-1T - ranging from 16B to 1T total parameters and achieving up to 7-fold active-compute efficiency compared with dense counterparts. Ling 2.0 integrates coordinated innovations across model architecture, pre-training, post-training, and infrastructure: a high-sparsity MoE with MTP (multi-token prediction) for efficient reasoning, reasoning-oriented data and mid-training CoT activation, reinforcement-based fine-tuning (DFT, Evo-CoT), and full-scale FP8 training with fine-grained heterogeneous pipelines. At the trillion scale, Ling-1T establishes a new Pareto frontier of reasoning accuracy versus computational efficiency, demonstrating that sparse activation, when properly aligned with reasoning objectives, enables scalable and efficient intelligence. Collectively, Ling 2.0 provides a coherent, open, and efficient foundation for advancing future reasoning and thinking models, including the Ring series built upon the same base.
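
The efficiency claim rests on sparse activation: each token is routed to only a few experts, so the compute that actually runs per token is a small fraction of the total parameter count. The snippet below is a minimal NumPy sketch of generic top-k MoE routing to illustrate that mechanism only; the expert counts, gating details, and MTP head of Ling 2.0 are described in the paper, and all sizes here are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch (not the authors' code) of top-k sparse MoE routing.
# Only top_k of n_experts run per token, so active compute stays small
# even as total parameters grow. All dimensions are toy values.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 32, 2            # assumed toy sizes
tokens = rng.standard_normal((8, d_model))        # 8 tokens in a batch

# Router: a linear layer scoring each token against every expert.
router_w = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)
# Each expert: a small feed-forward map (a single linear layer for brevity).
expert_w = rng.standard_normal((n_experts, d_model, d_model)) / np.sqrt(d_model)

def moe_forward(x):
    logits = x @ router_w                                   # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]           # chosen expert indices
    gates = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(gates - gates.max(-1, keepdims=True))
    gates /= gates.sum(-1, keepdims=True)                   # softmax over selected experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for k in range(top_k):
            e = top[t, k]
            out[t] += gates[t, k] * (x[t] @ expert_w[e])    # only top_k experts execute
    return out

y = moe_forward(tokens)
print(f"expert params active per token: {top_k}/{n_experts} = {top_k / n_experts:.1%}")
```

In this toy setting only about 6% of expert parameters run per token; the same idea, at much larger expert counts and model scale, is what underlies the reported active-compute efficiency gains over dense counterparts.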

Community

Paper author and submitter:

Technical report for the Ling 2.0 series, covering model architecture, pre-training, training infrastructure, post-training of the reflex-grade non-thinking versions, and comprehensive evaluations.


Models citing this paper 2

Datasets citing this paper 0

Spaces citing this paper 10

Collections including this paper 2