---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- inclusionAI/GroveMoE-Inst
---
# GroveMoE-Inst

<p align="left">
🤗 <a href="https://huggingface.co/collections/inclusionAI/grovemoe-68a2b58acbb55827244ef664">Models</a>&nbsp;&nbsp;|&nbsp;&nbsp;📑 <a href="https://arxiv.org/abs/2508.07785">Paper</a>&nbsp;&nbsp;|&nbsp;&nbsp;🔗 <a href="https://github.com/inclusionAI/GroveMoE">GitHub</a>
</p>

## Highlights

We introduce **GroveMoE**, a new sparse architecture that uses **adjugate experts** for dynamic computation allocation. Key highlights:

- **Architecture**: Novel **adjugate experts** grouped with ordinary experts; shared computation is executed once and then reused, cutting FLOPs.
- **Sparse Activation**: 33B total parameters, with only **3.14–3.28B** activated per token.
- **Training**: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.

## Model Downloads

| **Model** | **#Total Params** | **#Activated Params** | **HF Download** | **MS Download** |
|:---------:|:-----------------:|:---------------------:|:---------------:|:---------------:|
| GroveMoE-Base | 33B | 3.14~3.28B | [🤗 HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Base) | [📦 ModelScope](https://modelscope.cn/models/cccnju/GroveMoE-Base) |
| GroveMoE-Inst | 33B | 3.14~3.28B | [🤗 HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Inst) | [📦 ModelScope](https://modelscope.cn/models/cccnju/GroveMoE-Inst) |

## Performance

| Model | Activated Params | MMLU-Pro | SuperGPQA | GPQA-Diamond | OlympiadBench | Omni-math | AIME'25 | MultiPL-E | LiveCodeBench v6 |
|:-----:|:----------------:|:--------:|:---------:|:------------:|:-------------:|:---------:|:-------:|:---------:|:----------------:|
| Llama4-Scout | 17B | 64.9 | 42.0 | 55.6 | 56.6 | 30.2 | 10.0 | 45.0 | 32.0 |
| Qwen3-30B-A3B | 3B | 63.3 | 40.5 | 51.7 | 60.3 | 33.7 | 21.7 | 66.0 | 29.4 |
| Qwen3-32B | 32B | 68.2 | 43.0 | 53.6 | 59.5 | 31.8 | 22.9 | 68.6 | 28.6 |
| Gemma3-27B-IT | 27B | 67.1 | 35.6 | 45.3 | 59.9 | 33.3 | 23.1 | 65.5 | 30.9 |
| Mistral-Small-3.2 | 24B | 68.1 | 37.5 | 59.9 | 61.9 | 33.4 | 28.1 | 69.5 | 32.2 |
| GroveMoE-Inst | 3.14~3.28B | **72.8** | **47.7** | **61.3** | **71.2** | **43.5** | **44.4** | **74.5** | **34.6** |

The best score in each column is shown in bold. More details are reported in our [technical report](https://arxiv.org/abs/2508.07785).

## Run GroveMoE

### 🤗 Transformers Quick Start
Below are code snippets showing how to get started with the model. First, install the Transformers library:

```sh
pip install transformers==4.51.3
```

Then, copy the snippet that is relevant to your use case:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/GroveMoE-Inst"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
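
If you prefer the higher-level `pipeline` API, the following is a minimal sketch rather than an official recipe; it assumes the checkpoint loads through the same standard `AutoModelForCausalLM` path as above and uses the Transformers version pinned earlier.

```python
from transformers import pipeline

# Build a text-generation pipeline; device_map="auto" spreads the 33B-parameter
# checkpoint across available GPUs, torch_dtype="auto" keeps the native precision.
pipe = pipeline(
    "text-generation",
    model="inclusionAI/GroveMoE-Inst",
    torch_dtype="auto",
    device_map="auto",
)

# Chat-style input is passed as a list of messages; the pipeline applies the
# chat template internally.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
out = pipe(messages, max_new_tokens=1024)

# The last message in the returned conversation is the assistant reply.
print(out[0]["generated_text"][-1]["content"])
```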

### 🚀 SGLang Quick Start
To deploy with SGLang, follow the steps below.

1️⃣ Install Dependencies

First, clone the repository:
```shell
git clone https://github.com/inclusionAI/GroveMoE.git
```
Then, install the bundled Transformers:
```shell
cd GroveMoE/src/transformers-4.51.3
pip install .
```
Next, install SGLang:
```shell
cd ../sglang-0.4.6.post5
pip install .
```

2️⃣ Launch the Server

Run the following command to start SGLang:
```shell
python -m sglang.launch_server \
    --model-path inclusionAI/GroveMoE-Inst \
    --port 30000 \
    --context-length 32768
```

3️⃣ Access the API

Once started, an OpenAI-compatible API is available at `http://localhost:30000/v1`.

Test it with curl:
```shell
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "inclusionAI/GroveMoE-Inst",
    "messages": [{"role": "user", "content": "Hello, SGLang!"}]
  }'
```
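
You can also call the same endpoint from Python. The sketch below uses the `openai` package as one example of an OpenAI-compatible client (an assumption, not a requirement); the API key is a placeholder, since the local server does not check it unless you start it with an API key.

```python
from openai import OpenAI

# Point the client at the local SGLang server launched above.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="inclusionAI/GroveMoE-Inst",
    messages=[{"role": "user", "content": "Hello, SGLang!"}],
    # Sampling settings recommended in "Best Practices" below.
    temperature=0.7,
    top_p=0.8,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```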

### llama.cpp

Thanks to @CISCai, llama.cpp support is available via https://github.com/ggml-org/llama.cpp/pull/15510.

## Best Practices for Model Configuration
To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. (⚠️ For benchmarks that require sampling, e.g., AIME, these parameters must be configured explicitly.) A sketch applying these settings with Transformers follows this list.

2. **Adequate Output Length**: Set the output length to 16,384 tokens for general use cases, so that complex reasoning tasks have room to complete.

3. **Standardized Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-choice questions**: Add the following instruction to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
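
As a concrete illustration of these settings, the snippet below continues the 🤗 Transformers Quick Start above (`model`, `tokenizer`, and `model_inputs` are defined there). This is a sketch of how the recommended values map onto `generate` arguments, not part of the official evaluation setup; `min_p` requires a recent Transformers release such as the 4.51.3 pinned earlier.

```python
# Continuing from the Transformers Quick Start above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,   # adequate output length for long reasoning
    do_sample=True,         # sampling must be enabled for these settings to take effect
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```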

## Citation
```bibtex
@article{GroveMoE,
  title   = {GroveMoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts},
  author  = {Wu, Haoyuan and Chen, Haoxing and Chen, Xiaodong and Zhou, Zhanchao and Chen, Tieyuan and Zhuang, Yihong and Lu, Guoshan and Zhao, Junbo and Liu, Lin and Huang, Zenan and Lan, Zhenzhong and Yu, Bei and Li, Jianguo},
  journal = {arXiv preprint arXiv:2508.07785},
  year    = {2025}
}
```