Update README.md
README.md
CHANGED
@@ -21,13 +21,13 @@ language:
-# Mistral-Small-24B-Reasoning
+# LogicFlow-Mistral-Small-24B-Reasoning
 
-**Mistral-Small-24B-Reasoning** is a fine-tuned version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) that has been enhanced for advanced reasoning and thinking tasks. This model was trained on the high-quality [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset, which contains 114,000 synthetic reasoning examples covering mathematics, science, coding, and complex puzzles.
+**LogicFlow-Mistral-Small-24B-Reasoning** is a fine-tuned version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) that has been enhanced for advanced reasoning and thinking tasks. This model was trained on the high-quality [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset, which contains 114,000 synthetic reasoning examples covering mathematics, science, coding, and complex puzzles.
 
 ## Model Overview
 
-Mistral-Small-24B-Reasoning excels at:
+LogicFlow-Mistral-Small-24B-Reasoning excels at:
 - **Step-by-step reasoning** across multiple domains
 - **Mathematical problem solving** with detailed explanations
 - **Scientific analysis** and conceptual understanding
@@ -76,7 +76,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
 # Load the model and tokenizer
-model_name = "RekklesAI/Mistral-Small-24B-Reasoning"
+model_name = "RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
@@ -189,8 +189,8 @@ MistralForCausalLM(
 ## Citation
 
 ```bibtex
-@misc{
-  title={Mistral-Small-24B-Reasoning: A Reasoning-Enhanced Large Language Model},
+@misc{logicflowmistralsmall24breasoning,
+  title={LogicFlow-Mistral-Small-24B-Reasoning: A Reasoning-Enhanced Large Language Model},
   author={[Your Name]},
   year={2025},
   note={Fine-tuned from Mistral-Small-24B-Instruct-2501 using OpenThoughts-114k dataset}
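For context on the second hunk, a minimal sketch of how the updated quickstart snippet might look in full is shown below. Only the `RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning` repository name comes from this commit; the precision, device placement, chat-template call, and sample prompt are illustrative assumptions rather than part of the README diff.

```python
# Sketch of the updated quickstart. Only the model id comes from the diff;
# dtype, device placement, and the prompt are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer under the renamed repository
model_name = "RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to your hardware
    device_map="auto",           # assumed device placement
)

# Ask a reasoning-style question and decode only the newly generated tokens
messages = [{"role": "user", "content": "A train covers 120 km in 1.5 hours. What is its average speed?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```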