
D_AU - Reasoning Adapters / LORAs -> Any model to reasoning
LoRA adapters and methods to turn any model into a reasoning model, covering multiple model families including Llama, Mistral, Qwen, and more (a usage sketch follows the adapter list below).
- DavidAU/Mistral-Small-3-Reasoner-s1-24B-LORA-256-RANK
- DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-64-BASE-adapter
- DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter
- DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-BASE-adapter
- DavidAU/LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-64-adapter
- DavidAU/LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-128-adapter
- DavidAU/LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-256-adapter
- DavidAU/LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-512-adapter
- DavidAU/LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-32-adapter
- DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them
- DavidAU/DeepSeek-R1-Distill-Qwen-14B-LORA-64-RANK
- DavidAU/DeepSeek-R1-Distill-Qwen-14B-LORA-128-RANK
- DavidAU/DeepSeek-R1-Distill-Qwen-14B-LORA-256-RANK
- DavidAU/Mistral-Nemo-12B-LORA-64-RANK
- DavidAU/Mistral-Nemo-12B-LORA-32-Rank
- DavidAU/Mistral-Nemo-12B-LORA-128-RANK
- DavidAU/Mistral-Nemo-12B-LORA-256-RANK
- DavidAU/Mistral-Small-3-Reasoner-s1-24B-LORA-64-RANK
- DavidAU/Mistral-Small-3-Reasoner-s1-24B-LORA-128-RANK
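To show how these adapters are meant to be used, here is a minimal PEFT sketch that attaches one of the Llama-8B reasoning adapters to a base model and merges it in. The base model ID, prompt, and generation settings are illustrative assumptions; check each adapter's model card for the exact base it was extracted against (the BASE vs. INSTRUCT suffix in the repo name indicates which flavour it targets).

```python
# Minimal sketch, assuming the adapter targets a Llama-3.1-8B-class model.
# Verify the correct base model on the adapter's model card before using it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed base; verify per adapter card
adapter_id = "DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the reasoning LoRA; merge_and_unload() folds the adapter weights into
# the base model so the result behaves like a regular standalone model.
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()

messages = [{"role": "user", "content": "How many r's are in 'strawberry'? Think step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Once merged, the model can be saved with save_pretrained() and, if desired, converted to GGUF for use in apps such as Koboldcpp or LM Studio.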
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
Note: A document detailing the parameters, settings, samplers, and advanced samplers needed to get the most not only out of my models, but out of any model and any quant online, regardless of the repo. It includes a quick start, detailed notes, coverage of AI/LLM apps, and other critical information and references. A must-read if you are using any AI/LLM right now.
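As a rough illustration of the kind of settings that document covers, the sketch below builds a generic Hugging Face GenerationConfig with the most common sampler fields. The numeric values are placeholders, not the document's recommendations.

```python
# Illustrative placeholder values only; see the repo above for per-model,
# per-quant recommendations.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,         # higher = more creative, lower = more deterministic
    top_p=0.9,               # nucleus sampling cutoff
    top_k=40,                # hard cap on candidate tokens per step
    repetition_penalty=1.1,  # discourages loops, which matter more at low quants
    max_new_tokens=1024,
)

# e.g. out = model.generate(input_ids, generation_config=gen_config)
```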
DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE
Note: A SOFTWARE patch (by me) for SillyTavern (a front end that connects to multiple AI apps and APIs, such as Koboldcpp, LM Studio, and Text Generation Web UI) to control and improve the output generation of ANY AI model. It is also designed to control/wrangle some of my more "creative" models so they perform well with little to no parameter or sampler adjustment.