Text Generation
English
How to use reasoning models.
How to use thinking models.
How to create reasoning models.
deepseek
reasoning
reason
thinking
all use cases
creative
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
romance
all genres
story
writing
vivid writing
fiction
roleplaying
bfloat16
float32
float16
role play
sillytavern
backyard
lmstudio
Text Generation WebUI
llama 3
mistral
llama 3.1
qwen 2.5
context 128k
mergekit
Merge
Update README.md
README.md
CHANGED
@@ -185,6 +185,58 @@ this is NOT an issue as this is auto-detected/set, but if you are getting strang
Additional Section "General Notes" is at the end of this document.

GENERATION TIPS:

General:

Here are some example prompts that will "activate" thinking properly; note the length statements.

Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.

Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words.

Start a 1000 word scene (vivid, graphic horror in first person) with: The skyscraper swayed, as she watched the window in front of her on the 21st floor explode...

Using insane levels of bravado and self-confidence, tell me in 800-1000 words why I should use you to write my next fictional story. Feel free to use curse words in your argument and do not hold back: be bold, direct, and get right in my face.

Advanced:

You can input just the "thinking" part AS A "prompt" and sometimes get the model to start and process from this point.

Likewise you can EDIT the "thinking" part too -> changing the thought process itself.

Another way: Prompt the model, then copy/paste the "thinking" and the output.

New chat -> Same prompt -> Start generation
-> Stop, EDIT the output -> put the "raw thoughts" back in (you can edit these too), minus any output
-> Hit continue.

Other option(s):

In the "thoughts", change the wording/phrases that trigger thoughts/rethinking - even changing up the words themselves.
IE: changing words such as "alternatively" or "considering this" will have an impact on thinking/reasoning and the "end conclusions".

This is "generational steering", which is covered in this document:

https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters

Really Advanced:

If you are using a frontend like SillyTavern and/or an app like Text Generation WebUI, Llama-Server (llama.cpp) or Koboldcpp, you can change the LOGITS bias for word(s) and/or phrase(s).

Some of these apps also have "anti-slop" / word/phrase blocking too.

IE: LOWER the bias on "alternatively" and raise it on "considering" (you can also BLOCK word(s) and/or phrase(s) directly too).

By adjusting these bias(es) and/or adding blocks you can alter how the model thinks too - because reasoning, like normal AI/LLM generation, is all about prediction.

When you change the "chosen" next word and/or phrase you alter the output AND the generation too. The model chooses a different path - maybe only a slightly different one - but each choice is cumulative.

Careful testing and adjustment(s) can vastly alter the reasoning/thinking processes, which may assist with your use case(s).

TEMP/SETTINGS:

1. Set Temp between 0 and .8; higher than this, the "think" functions will activate differently. The most "stable" temp seems to be .6, with a variance of +/-0.05. Lower it for more "logic" reasoning, raise it for more "creative" reasoning (max .8 or so). Also set context to at least 4096, to account for "thoughts" generation.
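Those settings can be written down as a llama-cpp-python style configuration; the parameter names (`n_ctx`, `temperature`) assume that library, and other front-ends expose equivalents under different names:

```python
# Load-time: context must be at least 4096 so the "thoughts" fit as well.
load_kwargs = {"n_ctx": 4096}

# Sampling: .6 is the "stable" default; stay within the 0-0.8 band,
# lowering for more "logic" reasoning, raising (max ~.8) for "creative".
sample_kwargs = {"temperature": 0.6}
```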