🤓LIA (Llama 3.1 8B — Finetuned with Unsloth)
A finetuned Llama 3.1 8B model specialized for Local Intelligent Agent (LIA) intent parsing and local file/system actions. The model converts user requests into compact JSON that LIA executes safely.
To download LIA and for more detailed information, refer to the link below.
Overview🧐
- Base: Llama 3.1 8B Instruct
- Method: Unsloth SFT (LoRA), merged for deployment
- Dataset: Custom, user-created (intent pairs)
- Output: Raw JSON only (no markdown), with keys: command_type, parameters, reasoning
- Primary goal: Deterministic intent parsing for desktop automation
😎Purpose and Tasks
- Parse file/folder operations: open, list, create, write, read, delete, copy, move, rename
- Interpret patterns (e.g., *.pdf) and paths
- Safe fallback to chat intent when the request is not a file operation
- Produce stable JSON without code fences or extra prose
Example output:
{
  "command_type": "list_files",
  "parameters": {"path": "Downloads", "pattern": "*.pdf"},
  "reasoning": "User wants to list PDFs in Downloads"
}
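As a rough illustration of how a client such as LIA might consume this output, the Python sketch below parses the model's reply and falls back to a chat intent when the JSON is malformed or the command type is unrecognized. The command list and function name are illustrative assumptions, not LIA's actual implementation.

import json

# Illustrative set of command types, taken from the task list above;
# not an exhaustive copy of LIA's internal whitelist.
KNOWN_COMMANDS = {
    "open", "list_files", "create", "write", "read",
    "delete", "copy", "move", "rename", "chat",
}

def parse_intent(raw: str) -> dict:
    """Parse the model's raw JSON reply, falling back to a chat intent."""
    try:
        intent = json.loads(raw)
    except json.JSONDecodeError:
        # Model returned prose or malformed JSON: treat it as conversation.
        return {"command_type": "chat",
                "parameters": {"text": raw},
                "reasoning": "unparseable output"}

    if intent.get("command_type") not in KNOWN_COMMANDS:
        # Unknown command types are never executed; route them to chat instead.
        intent = {"command_type": "chat",
                  "parameters": {"text": raw},
                  "reasoning": "unknown command type"}
    return intent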
😲Differences vs Original Llama 3.1 8B
- More consistent JSON-only answers for intent parsing
- Lower hallucination rate on file/command names
- Better handling of short/telegraphic commands
- Tuned for low temperature decoding (0.1–0.3)
Training (Unsloth)
- LoRA-based SFT on user dataset (input → JSON output pairs)
- Chat template aligned with Llama 3.1
- System prompt stresses: “Return raw JSON only”
- Adapters merged to a full checkpoint for serving
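For reference, a minimal sketch of the Unsloth LoRA SFT recipe described above. The dataset path, hyperparameters, and output directory are assumptions rather than the exact training configuration, and the trainer arguments may differ slightly depending on your unsloth/trl versions.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for memory-efficient finetuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Input -> JSON output pairs, pre-rendered with the Llama 3.1 chat template
# into a single "text" field (file name is hypothetical).
dataset = load_dataset("json", data_files="lia_intents.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Merge the LoRA adapters into a full checkpoint for serving.
model.save_pretrained_merged("lia-merged", tokenizer, save_method="merged_16bit")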
Quick start (Ollama):
ollama run hf.co/Yusiko/LIA:Q4_K_M
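Beyond the CLI, the model can also be called through Ollama's local REST API. The Python sketch below assumes the default Ollama port (11434) and the model tag from the command above, and uses a low temperature as recommended; the prompt is only an example.

import json
import requests

# Send a request to the locally served model via Ollama's /api/generate endpoint.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/Yusiko/LIA:Q4_K_M",
        "prompt": "list all PDFs in Downloads",
        "stream": False,
        "options": {"temperature": 0.2},  # low temperature decoding, as recommended above
    },
    timeout=120,
)

# The model replies with raw JSON only, so the response text can be parsed directly.
intent = json.loads(resp.json()["response"])
print(intent["command_type"], intent["parameters"])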
📃License and Credits
- Base: Meta Llama 3.1 8B Instruct (respect base license)
- Finetuning: Unsloth
- Packaging: Ollama
- LIA is released under the MIT license
For questions or integration help, open an issue on the repository.