# Evolve Mistral: Fine-Tuned Mistral-7B-Instruct for AI CRUD & Code Generation
This is a fine-tuned version of `mistralai/Mistral-7B-Instruct-v0.2`, adapted specifically for code generation, schema-driven CRUD reasoning, and full-stack boilerplate automation. It powers the AI agent layer behind the Self-Revolve project.
## Project Context: Self-Revolve
Evolve Mistral is a fine-tuned open-source model purpose-built for powering code generation in the Self-Revolve project.
> “Instantly generate full-stack admin panels, APIs, and UIs from your database schema—powered by AI agents & LLMs.”
Key capabilities:
- Auto-generates CRUD APIs from DB schemas
- Generates React/MUI admin interfaces
- Supports SQL & NoSQL databases
- Works without OpenAI keys
- Open-source & self-hostable
## Dataset
`kramster/crud-code-tests`

A high-quality Alpaca-style dataset focused on database and backend code generation. Each example contains three fields:

- `instruction`
- `input`
- `output`
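To get a quick look at the data, you can load it with the `datasets` library. A minimal sketch, assuming the dataset is publicly available on the Hugging Face Hub under that name with a `train` split:

```python
from datasets import load_dataset

# Load the Alpaca-style CRUD dataset from the Hugging Face Hub
ds = load_dataset("kramster/crud-code-tests", split="train")

# Each record carries the three fields listed above
example = ds[0]
print(example["instruction"])
print(example["input"])
print(example["output"])
```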
## Training Setup
| Detail | Value |
|---|---|
| Base model | mistralai/Mistral-7B-Instruct-v0.2 |
| Dataset | crud-code-tests (Alpaca-style) |
| LoRA config | r=32, alpha=16 |
| Framework | Axolotl + DeepSpeed + LoRA |
| Epochs | ~3.94 |
| Steps | 51 |
| Precision | bfloat16 |
| GPU | NVIDIA H100 80GB |
| Duration | ~10 min |
| Train loss | 0.0909 |
| Eval loss | 0.1012 |
| FLOPs | ~347.6 trillion |
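Training was run through Axolotl; for readers more familiar with Hugging Face PEFT, the adapter settings above correspond roughly to the following sketch. The `target_modules` list is an assumption: the card only specifies r and alpha.

```python
from peft import LoraConfig

# Rough PEFT equivalent of the LoRA settings above (r=32, alpha=16).
# target_modules is an assumption; the card does not say which
# projections were adapted, and attention projections are a common choice.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```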
## Evaluation Summary
- Eval runtime: 2.84s
- Samples/sec: 2.11
- Steps/sec: 1.05
- Final learning rate: 2.93e-7
- Gradient norm: 0.064
## Example Usage (vLLM)
```bash
vllm serve kramster/evolve-mistral \
  --max-model-len 64000 \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
  --no-enable-prefix-caching
```
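Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default), so any OpenAI client can talk to it. A minimal sketch using the `openai` Python package, assuming a local server; the prompt is just an illustration:

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; no real key is needed locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="kramster/evolve-mistral",
    messages=[
        {
            "role": "user",
            "content": "Generate a CRUD REST API in Express for a `users` "
                       "table with columns id, email, and created_at.",
        }
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```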