Fernando J. Albornoz committed on
Commit 61f1682 · verified · 1 Parent(s): 1a5be19

Update README.md

Files changed (1):
  1. README.md +120 -8

README.md CHANGED
@@ -1,12 +1,124 @@
  ---
  license: apache-2.0
  language:
  - en
- base_model:
- - Qwen/Qwen3-1.7B
- library_name: adapter-transformers
- tags:
- - qwen
- - math
- - mini
- ---
  ---
+ base_model: Qwen/Qwen3-1.7B
+ model_name: Math Mini 1.7B (Preview)
+ tags:
+ - text-generation
+ - text-generation-inference
+ - transformers
+ - qwen3
+ - math
+ - enosis-labs
+ - math-mini
+ - vllm # Tag for the vLLM-ready version
  license: apache-2.0
  language:
  - en
+ ---
+
+ # Math Mini 1.7B (Preview)
+
+ **Math Mini 1.7B (Preview)** is a larger, more capable model developed by **Enosis Labs** as part of the "Mini Series." Building on the 0.6B version, this 1.7B model delivers significantly better performance, deeper reasoning, and greater accuracy on mathematical tasks. It is fine-tuned from the original `Qwen/Qwen3-1.7B` base model (not from Unsloth's pre-adapted versions).
+
+ ## Philosophy & Capabilities
+
+ The Mini Series, along with the "Enosis Math" and "Enosis Code" models, applies step-by-step reasoning by default, yielding clearer, more efficient, and better-grounded answers. All models in the Math series were trained on carefully curated step-by-step problem-solving datasets, giving them a stronger ability to reason about and explain solutions in a structured way.
+
+ **Math Mini 1.7B (Preview)** is optimized for:
+
+ * **Basic and Intermediate Algebra:** Solving equations, manipulating expressions, and handling more complex algebraic problems.
+ * **Arithmetic & Sequential Reasoning:** Performing calculations and breaking problems down into logical steps, with improved multi-step reasoning.
+ * **Elementary & Intermediate Logic:** Applying deduction in mathematical contexts, now with broader coverage.
+ * **Competition Problem Solving (Introductory to Intermediate):** Enhanced foundational and competition-style skills, scaled with the larger model.
+
+ Larger models in the "Enosis Math" series address more advanced topics such as calculus, higher algebra, and olympiad problems. The "Code Mini" and "Enosis Code" series target programming and algorithmic tasks while keeping the same philosophy of explicit, efficient reasoning.
+
+ This model is a **preview version** and is under continuous improvement and evaluation.
+
+ ## Quick Start
+
+ The model is available in Hugging Face Transformers format and for high-throughput inference servers such as vLLM.
+
+ ### vLLM (Inference Server)
+
+ Install vLLM:
+
+ ```bash
+ pip install vllm
+ ```
+
+ Start the vLLM server with the 16-bit version of the model:
+
+ ```bash
+ vllm serve "enosislabs/math-mini-1.7b-preview-16bits"
+ ```
+
+ Call the server with curl:
+
+ ```bash
+ curl -X POST "http://localhost:8000/v1/chat/completions" \
+     -H "Content-Type: application/json" \
+     --data '{
+         "model": "enosislabs/math-mini-1.7b-preview-16bits",
+         "messages": [
+             {"role": "user", "content": "What is the capital of France?"}
+         ]
+     }'
+ ```
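Because `vllm serve` exposes an OpenAI-compatible API, the same request can also be issued from Python. The sketch below uses only the standard library and assumes the server started above is running at `localhost:8000`; `build_chat_request` and `ask` are illustrative helpers, not part of vLLM or this repo.

```python
import json
import urllib.request

# Endpoint exposed by `vllm serve` (OpenAI-compatible chat completions API).
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "enosislabs/math-mini-1.7b-preview-16bits"


def build_chat_request(question: str) -> dict:
    """Build the JSON body for a single-turn chat completion request."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
    }


def ask(question: str) -> str:
    """POST the request to the running vLLM server and return the reply text."""
    body = json.dumps(build_chat_request(question)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload["choices"][0]["message"]["content"]


# ask("Solve 2x + 3 = 11 for x.")  # requires the server above to be running
```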
+
+ ### Transformers (Hugging Face)
+
+ Use a pipeline as a high-level helper:
+
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="enosislabs/math-mini-1.7b-preview-16bits")
+ messages = [
+     {"role": "user", "content": "Who are you?"},
+ ]
+ pipe(messages)
+ ```
+
+ ## Prompt Format (Qwen3 ChatML)
+
+ For best results, use the Qwen3 ChatML format. The `tokenizer.apply_chat_template` method applies it automatically.
+
+ ```text
+ <|im_start|>system
+ You are a helpful AI assistant. Provide a detailed step-by-step solution.<|im_end|>
+ <|im_start|>user
+ {user_question}<|im_end|>
+ <|im_start|>assistant
+ ```
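If you build prompts by hand instead of going through `apply_chat_template`, the template above can be reproduced in a few lines. This is a minimal illustrative sketch; `format_chatml` is a hypothetical helper, not part of the model repo or the tokenizer.

```python
def format_chatml(
    user_question: str,
    system_prompt: str = (
        "You are a helpful AI assistant. "
        "Provide a detailed step-by-step solution."
    ),
) -> str:
    """Render a single-turn conversation in Qwen3 ChatML, ending with the
    assistant header so the model continues from there."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


prompt = format_chatml("What is the derivative of x^2?")
print(prompt)
```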
+
+ ## Acknowledgements
+
+ * Fine-tuned from the original `Qwen/Qwen3-1.7B` base model.
+ * Training was accelerated and optimized with [Unsloth](https://github.com/unslothai/unsloth); Unsloth's pre-adapted weights were not used.
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @software{enosislabs_math_mini_1.7b_preview_2025,
+   author    = {{Enosis Labs}},
+   title     = {{Math Mini 1.7B (Preview)}},
+   year      = {2025},
+   publisher = {Hugging Face},
+   version   = {0.1-preview},
+   url       = {https://huggingface.co/enosislabs/math-mini-1.7b-preview-16bits}
+ }
+ ```
+
+ <!--
+ Key points:
+ - Now 1.7B, with improved performance and reasoning over 0.6B.
+ - Fine-tuned from the original Qwen3-1.7B, not Unsloth's pre-adapted weights.
+ - Emphasizes default activation of step-by-step reasoning across the series.
+ - Clear and modern examples for vLLM and Transformers.
+ - ChatML prompt is central to the experience.
+ - Assumes the repo contains the 1.7B model for both vLLM and Transformers.
+ -->