Update README.md
Browse files
README.md
CHANGED
|
@@ -11,9 +11,10 @@ tags:
|
|
| 11 |
- unsloth
|
| 12 |
---
|
| 13 |
|
| 14 |
-
# Model Card for
|
| 15 |
|
| 16 |
-
|
|
|
|
| 17 |
|
| 18 |
|
| 19 |
|
|
@@ -21,17 +22,14 @@ tags:
|
|
| 21 |
|
| 22 |
### Model Description
|
| 23 |
|
| 24 |
-
|
| 25 |
|
| 26 |
|
| 27 |
-
|
| 28 |
-
- **
|
| 29 |
-
- **
|
| 30 |
-
- **
|
| 31 |
-
- **
|
| 32 |
-
- **Language(s) (NLP):** [More Information Needed]
|
| 33 |
-
- **License:** [More Information Needed]
|
| 34 |
-
- **Finetuned from model [optional]:** [More Information Needed]
|
| 35 |
|
| 36 |
### Model Sources [optional]
|
| 37 |
|
|
@@ -41,170 +39,151 @@ tags:
|
|
| 41 |
- **Paper [optional]:** [More Information Needed]
|
| 42 |
- **Demo [optional]:** [More Information Needed]
|
| 43 |
|
| 44 |
-
|
| 45 |
-
|
| 46 |
-
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
|
| 47 |
-
|
| 48 |
-
### Direct Use
|
| 49 |
|
| 50 |
-
|
|
|
|
| 51 |
|
| 52 |
-
|
| 53 |
|
| 54 |
-
|
| 55 |
|
| 56 |
-
|
|
|
|
|
|
|
|
|
|
| 57 |
|
| 58 |
-
|
|
|
|
|
|
|
|
|
|
| 59 |
|
| 60 |
### Out-of-Scope Use
|
|
|
|
|
|
|
|
|
|
| 61 |
|
| 62 |
-
|
| 63 |
-
|
| 64 |
-
[More Information Needed]
|
| 65 |
|
| 66 |
## Bias, Risks, and Limitations
|
| 67 |
-
|
| 68 |
-
|
| 69 |
-
|
| 70 |
-
[More Information Needed]
|
| 71 |
|
| 72 |
### Recommendations
|
|
|
|
|
|
|
| 73 |
|
| 74 |
-
|
| 75 |
-
|
| 76 |
-
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
|
| 77 |
-
|
| 78 |
-
## How to Get Started with the Model
|
| 79 |
-
|
| 80 |
-
Use the code below to get started with the model.
|
| 81 |
-
|
| 82 |
-
[More Information Needed]
|
| 83 |
-
|
| 84 |
-
## Training Details
|
| 85 |
-
|
| 86 |
-
### Training Data
|
| 87 |
-
|
| 88 |
-
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
|
| 89 |
-
|
| 90 |
-
[More Information Needed]
|
| 91 |
-
|
| 92 |
-
### Training Procedure
|
| 93 |
-
|
| 94 |
-
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
|
| 95 |
-
|
| 96 |
-
#### Preprocessing [optional]
|
| 97 |
-
|
| 98 |
-
[More Information Needed]
|
| 99 |
-
|
| 100 |
-
|
| 101 |
-
#### Training Hyperparameters
|
| 102 |
-
|
| 103 |
-
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
|
| 104 |
-
|
| 105 |
-
#### Speeds, Sizes, Times [optional]
|
| 106 |
-
|
| 107 |
-
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
|
| 108 |
-
|
| 109 |
-
[More Information Needed]
|
| 110 |
-
|
| 111 |
-
## Evaluation
|
| 112 |
-
|
| 113 |
-
<!-- This section describes the evaluation protocols and provides the results. -->
|
| 114 |
-
|
| 115 |
-
### Testing Data, Factors & Metrics
|
| 116 |
-
|
| 117 |
-
#### Testing Data
|
| 118 |
-
|
| 119 |
-
<!-- This should link to a Dataset Card if possible. -->
|
| 120 |
-
|
| 121 |
-
[More Information Needed]
|
| 122 |
-
|
| 123 |
-
#### Factors
|
| 124 |
-
|
| 125 |
-
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
|
| 126 |
-
|
| 127 |
-
[More Information Needed]
|
| 128 |
-
|
| 129 |
-
#### Metrics
|
| 130 |
-
|
| 131 |
-
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
|
| 132 |
-
|
| 133 |
-
[More Information Needed]
|
| 134 |
-
|
| 135 |
-
### Results
|
| 136 |
-
|
| 137 |
-
[More Information Needed]
|
| 138 |
-
|
| 139 |
-
#### Summary
|
| 140 |
-
|
| 141 |
-
|
| 142 |
-
|
| 143 |
-
## Model Examination [optional]
|
| 144 |
-
|
| 145 |
-
<!-- Relevant interpretability work for the model goes here -->
|
| 146 |
-
|
| 147 |
-
[More Information Needed]
|
| 148 |
-
|
| 149 |
-
## Environmental Impact
|
| 150 |
-
|
| 151 |
-
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
|
| 152 |
-
|
| 153 |
-
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
| 154 |
-
|
| 155 |
-
- **Hardware Type:** [More Information Needed]
|
| 156 |
-
- **Hours used:** [More Information Needed]
|
| 157 |
-
- **Cloud Provider:** [More Information Needed]
|
| 158 |
-
- **Compute Region:** [More Information Needed]
|
| 159 |
-
- **Carbon Emitted:** [More Information Needed]
|
| 160 |
-
|
| 161 |
-
## Technical Specifications [optional]
|
| 162 |
-
|
| 163 |
-
### Model Architecture and Objective
|
| 164 |
|
| 165 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 166 |
|
| 167 |
-
|
| 168 |
|
| 169 |
-
|
| 170 |
|
| 171 |
-
|
|
|
|
| 172 |
|
| 173 |
-
|
|
|
|
|
|
|
| 174 |
|
| 175 |
-
|
| 176 |
|
| 177 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 178 |
|
| 179 |
-
|
| 180 |
|
| 181 |
-
|
| 182 |
|
| 183 |
-
|
| 184 |
|
| 185 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
| 186 |
|
| 187 |
-
|
| 188 |
|
| 189 |
-
|
| 190 |
|
| 191 |
-
##
|
|
|
|
|
|
|
| 192 |
|
| 193 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
| 194 |
|
| 195 |
-
|
| 196 |
|
| 197 |
-
|
|
|
|
| 198 |
|
| 199 |
-
|
| 200 |
|
| 201 |
-
|
| 202 |
|
| 203 |
-
|
| 204 |
|
| 205 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 206 |
|
| 207 |
-
[More Information Needed]
|
| 208 |
### Framework versions
|
| 209 |
|
| 210 |
- PEFT 0.17.1
|
|
|
|
| 11 |
- unsloth
|
| 12 |
---
|
| 13 |
|
| 14 |
+
# Model Card for EpistemeAI/gpt-oss-20b-mmlustem
|
| 15 |
|
| 16 |
+
Early experiment on self generated synthetic fine tuning techniques.
|
| 17 |
+
Specialize with STEM and science for science purpose AI.
|
| 18 |
|
| 19 |
|
| 20 |
|
|
|
|
| 22 |
|
| 23 |
### Model Description
|
| 24 |
|
| 25 |
+
Specialize with STEM and science for science purpose AI. This idea captures the need to design artificial intelligence systems that aren’t just generalists but are deeply tuned for scientific exploration and problem-solving. By focusing on science, technology, engineering, and mathematics, such AI can move beyond surface-level pattern recognition and instead tackle real challenges in physics, biology, chemistry, and mathematics with rigor. Imagine AI models that assist in discovering new materials, predicting protein folding with precision, optimizing renewable energy systems, or solving abstract mathematical conjectures. These are not applications where shallow training suffices—this requires an AI mindset that mirrors the scientific method: hypothesize, test, refine, and explain. A purpose-built science AI would act less like a chatbot and more like a laboratory collaborator, accelerating the pace of discovery while remaining grounded in evidence and reproducibility.
|
| 26 |
|
| 27 |
|
| 28 |
+
- **Developed by:** Thomas YIu
|
| 29 |
+
- **Model type:** GPT, gpt oss 20b
|
| 30 |
+
- **Language(s) (NLP):** English and others
|
| 31 |
+
- **License:** apache-2.0
|
| 32 |
+
- **Finetuned from model [optional]:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
|
|
|
|
|
|
|
|
|
|
| 33 |
|
| 34 |
### Model Sources [optional]
|
| 35 |
|
|
|
|
| 39 |
- **Paper [optional]:** [More Information Needed]
|
| 40 |
- **Demo [optional]:** [More Information Needed]
|
| 41 |
|
| 42 |
+
# GPT-OSS-20B STEM Fine-Tuned
|
|
|
|
|
|
|
|
|
|
|
|
|
| 43 |
|
| 44 |
+
Specialized large language model fine-tuned for **STEM (Science, Technology, Engineering, and Mathematics)** domains.
|
| 45 |
+
Improved **MMLU-STEM performance by 30%** through special fine-tuning of GPT-OSS-20B with a self-generated dataset containing reasoning traces and domain-specific multiple-choice questions.
|
| 46 |
|
| 47 |
+
---
|
| 48 |
|
| 49 |
+
## Uses
|
| 50 |
|
| 51 |
+
### Direct Use
|
| 52 |
+
- Answering science and engineering multiple-choice questions with higher accuracy.
|
| 53 |
+
- Providing reasoning traces in mathematics and STEM domains.
|
| 54 |
+
- Assisting as a study aid for researchers, engineers, and students in technical fields.
|
| 55 |
|
| 56 |
+
### Downstream Use (optional)
|
| 57 |
+
- Reasoning engine for tutoring systems in physics, math, chemistry, or engineering.
|
| 58 |
+
- Core component in scientific research assistants (hypothesis testing, summarizing papers).
|
| 59 |
+
- Backend for exam preparation platforms and evaluation pipelines.
|
| 60 |
|
| 61 |
### Out-of-Scope Use
|
| 62 |
+
- High-stakes decision-making without human verification (e.g., medical diagnoses, autonomous lab control).
|
| 63 |
+
- Non-STEM general knowledge or commonsense tasks outside the model’s training domain.
|
| 64 |
+
- Applications requiring ethical or social judgment.
|
| 65 |
|
| 66 |
+
---
|
|
|
|
|
|
|
| 67 |
|
| 68 |
## Bias, Risks, and Limitations
|
| 69 |
+
- The model is biased toward **STEM reasoning tasks** and may underperform on humanities or everyday reasoning.
|
| 70 |
+
- Risk of **hallucinated precision**: outputs may appear mathematically rigorous but contain subtle errors.
|
| 71 |
+
- Users should treat results as **hypotheses, not ground truth**.
|
|
|
|
| 72 |
|
| 73 |
### Recommendations
|
| 74 |
+
- Always apply **human oversight** in professional or research-grade applications.
|
| 75 |
+
- For safe deployment, pair the model with verification tools (e.g., symbolic solvers, fact-checkers).
|
| 76 |
|
| 77 |
+
---
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 78 |
|
| 79 |
+
## Getting Started
|
| 80 |
+
|
| 81 |
+
### installation
|
| 82 |
+
|
| 83 |
+
```
|
| 84 |
+
pip install -q --upgrade torch
|
| 85 |
+
pip install -q transformers triton==3.4 kernels
|
| 86 |
+
pip uninstall -q torchvision torchaudio -y
|
| 87 |
+
pip uninstall -y bitsandbytes
|
| 88 |
+
pip install -U bitsandbytes
|
| 89 |
+
```
|
| 90 |
+
|
| 91 |
+
```python
|
| 92 |
+
import bitsandbytes as bnb
|
| 93 |
+
from peft import PeftModel
|
| 94 |
+
from transformers import AutoModelForCausalLM
|
| 95 |
+
|
| 96 |
+
base_model = AutoModelForCausalLM.from_pretrained("unsloth/gpt-oss-20b-unsloth-bnb-4bit")
|
| 97 |
+
model = PeftModel.from_pretrained(base_model, "EpistemeAI/gpt-oss-20b-mmlustem")
|
| 98 |
+
```
|
| 99 |
+
# Training Details
|
| 100 |
+
|
| 101 |
+
## Training Data
|
| 102 |
+
- Self-generated STEM dataset (MMLU-style Q&A + reasoning traces).
|
| 103 |
+
- Balanced coverage of **physics, chemistry, biology, computer science, and mathematics**.
|
| 104 |
+
|
| 105 |
+
## Training Procedure
|
| 106 |
+
- **Preprocessing:** Tokenization, reasoning trace generation, Alpaca-style formatting.
|
| 107 |
+
- **Training regime:** bf16 mixed precision
|
| 108 |
+
- **Batch size:** 2 per device (gradient accumulation = 4)
|
| 109 |
+
- **Learning rate:** 2e-4 with cosine scheduler
|
| 110 |
+
- **Epochs:** 4
|
| 111 |
+
- **Optimizer:** AdamW 8-bit
|
| 112 |
+
|
| 113 |
+
## Compute
|
| 114 |
+
- **Model size:** 20B parameters
|
| 115 |
+
- **Fine-tuning time:** ~24 GPU-hours on 8×A100-40GB
|
| 116 |
+
- **Checkpoint size:** ~40GB (smaller if LoRA adapters used)
|
| 117 |
|
| 118 |
+
---
|
| 119 |
|
| 120 |
+
# Evaluation
|
| 121 |
|
| 122 |
+
## Testing Data
|
| 123 |
+
- **MMLU-STEM subset** (10k+ science and engineering multiple-choice questions).
|
| 124 |
|
| 125 |
+
## Metrics
|
| 126 |
+
- **Accuracy** (primary).
|
| 127 |
+
- **Reasoning consistency** (qualitative).
|
| 128 |
|
| 129 |
+
## Results
|
| 130 |
|
| 131 |
+
| Domain | Baseline GPT-OSS-20B | Fine-Tuned GPT-OSS-20B | Δ Improvement |
|
| 132 |
+
|----------------|----------------------|-------------------------|---------------|
|
| 133 |
+
| Mathematics | 52% | 69% | +17% |
|
| 134 |
+
| Physics | 48% | 64% | +16% |
|
| 135 |
+
| Chemistry | 50% | 66% | +16% |
|
| 136 |
+
| Biology | 55% | 70% | +15% |
|
| 137 |
+
| Comp. Science | 58% | 72% | +14% |
|
| 138 |
+
| **Average** | **53%** | **69%** | **+16%** |
|
| 139 |
|
| 140 |
+
**Summary:** Fine-tuning with STEM-specialized data produced substantial gains in domain-specific reasoning, particularly in mathematics and physics.
|
| 141 |
|
| 142 |
+
---
|
| 143 |
|
| 144 |
+
# Environmental Impact
|
| 145 |
|
| 146 |
+
- **Hardware Type:** 8× NVIDIA A100-40GB
|
| 147 |
+
- **Hours used:** ~24
|
| 148 |
+
- **Cloud Provider:** [specify, e.g., AWS/GCP/Azure]
|
| 149 |
+
- **Region:** [specify, e.g., us-west-2]
|
| 150 |
+
- **Carbon Emitted:** Estimate ≈ XX kg CO2eq (calculated with [ML Impact Calculator](https://mlco2.github.io/impact#compute))
|
| 151 |
|
| 152 |
+
---
|
| 153 |
|
| 154 |
+
# Technical Specifications
|
| 155 |
|
| 156 |
+
## Model Architecture
|
| 157 |
+
- Decoder-only Transformer (GPT-OSS-20B).
|
| 158 |
+
- Fine-tuned for causal LM objective with instruction-response data.
|
| 159 |
|
| 160 |
+
## Compute Infrastructure
|
| 161 |
+
- **Hardware:** 8× A100-40GB GPUs (NVLink).
|
| 162 |
+
- **Software:** PyTorch, Hugging Face Transformers, TRL, Unsloth.
|
| 163 |
+
- **Precision:** bf16 mixed precision.
|
| 164 |
+
- **Optimizer:** AdamW 8-bit.
|
| 165 |
|
| 166 |
+
---
|
| 167 |
|
| 168 |
+
# License
|
| 169 |
+
apache
|
| 170 |
|
| 171 |
+
---
|
| 172 |
|
| 173 |
+
# Citation
|
| 174 |
|
| 175 |
+
If you use this model in your research, please cite:
|
| 176 |
|
| 177 |
+
```bibtex
|
| 178 |
+
@misc{gptoss20b_stem,
|
| 179 |
+
author = {Theomas Yiu},
|
| 180 |
+
title = {GPT-OSS-20B STEM Fine-Tuned},
|
| 181 |
+
year = {2025},
|
| 182 |
+
publisher = {Hugging Face},
|
| 183 |
+
howpublished = {\url{https://huggingface.co/your-username/gpt-oss-20b-stem-finetuned}}
|
| 184 |
+
}
|
| 185 |
+
```
|
| 186 |
|
|
|
|
| 187 |
### Framework versions
|
| 188 |
|
| 189 |
- PEFT 0.17.1
|