RekklesAI committed
Commit c3caa13 · verified · 1 Parent(s): 5920ac4

Update README.md

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -21,13 +21,13 @@ language:
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/664589a52d210101d1eac6ad/GeOMgW7RLvZ5PpMY1klCU.png)
 
-# Mistral-Small-24B-Reasoning
+# LogicFlow-Mistral-Small-24B-Reasoning
 
-**Mistral-Small-24B-Reasoning** is a fine-tuned version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) that has been enhanced for advanced reasoning and thinking tasks. This model was trained on the high-quality [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset, which contains 114,000 synthetic reasoning examples covering mathematics, science, coding, and complex puzzles.
+**LogicFlow-Mistral-Small-24B-Reasoning** is a fine-tuned version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) that has been enhanced for advanced reasoning and thinking tasks. This model was trained on the high-quality [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset, which contains 114,000 synthetic reasoning examples covering mathematics, science, coding, and complex puzzles.
 
 ## 🚀 Model Overview
 
-Mistral-Small-24B-Reasoning excels at:
+LogicFlow-Mistral-Small-24B-Reasoning excels at:
 - **Step-by-step reasoning** across multiple domains
 - **Mathematical problem solving** with detailed explanations
 - **Scientific analysis** and conceptual understanding
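The hunk above only renames the model; the OpenThoughts-114k claim is unchanged. As a quick sanity check that the cited dataset resolves on the Hub, a minimal sketch, assuming the default config and a `train` split (neither is stated in the diff):

```python
# Pull the dataset the README cites and inspect it, rather than
# hard-coding a schema the README does not document.
from datasets import load_dataset

# Repo id comes from the README; the "train" split name is an assumption.
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

print(ds.num_rows)        # expected to be on the order of 114k examples
print(ds.column_names)    # discover the actual field names
print(str(ds[0])[:500])   # first example, truncated for readability
```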
@@ -76,7 +76,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
 # Load the model and tokenizer
-model_name = "RekklesAI/Mistral-Small-24B-Reasoning"
+model_name = "RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
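This hunk swaps only the repo id inside the README's truncated `transformers` snippet. Filled out, a runnable version under the new id could look like the sketch below; the dtype, device map, and prompt are illustrative assumptions, not taken from the diff:

```python
# Minimal end-to-end load-and-generate sketch for the renamed checkpoint.
# Only the repo id is confirmed by the diff; everything else is assumed.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumption: half precision for a 24B model
    device_map="auto",           # assumption: shard across available devices
)

# Chat-style prompting via the tokenizer's chat template.
messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```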
@@ -189,8 +189,8 @@ MistralForCausalLM(
 ## 📝 Citation
 
 ```bibtex
-@misc{mistralsmall24breasoning,
-  title={Mistral-Small-24B-Reasoning: A Reasoning-Enhanced Large Language Model},
+@misc{logicflowmistralsmall24breasoning,
+  title={LogicFlow-Mistral-Small-24B-Reasoning: A Reasoning-Enhanced Large Language Model},
   author={[Your Name]},
   year={2025},
   note={Fine-tuned from Mistral-Small-24B-Instruct-2501 using OpenThoughts-114k dataset}
 