# Fortinet LoRA GGUF (Q4_K_M)

- Author: NOYOUllm2
- License: MIT
- Model type: DeepSeek-LM 7B, fine-tuned on Fortinet CLI/troubleshooting data, merged and quantized to GGUF (Q4_K_M)
- Quantization: Q4_K_M
- Format: GGUF (for llama.cpp, LM Studio, and compatible tools)
- Status: Experimental / Community

## Model Description

This model is a Fortinet CLI and troubleshooting assistant, built by fine-tuning DeepSeek-LM 7B on a custom dataset of FortiGate commands, log messages, and admin Q&A pairs.

- Base model: DeepSeek-LM 7B
- Fine-tuning: Axolotl (QLoRA/LoRA)
- Merge: LoRA adapter merged into the base model and exported in Hugging Face format
- Quantization: Q4_K_M using llama.cpp
- Format: GGUF, for use with llama.cpp, LM Studio, Ollama, OpenWebUI, and other GGUF-compatible tools
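For reference, the convert-and-quantize steps look roughly like the sketch below. This is a minimal outline, assuming the LoRA has already been merged into a standard Hugging Face checkpoint and that you are using current llama.cpp tooling (older builds name the quantizer `./quantize`); the local paths and filenames are illustrative, not part of this repo.

```bash
# Convert the merged Hugging Face checkpoint to an f16 GGUF
# (./fortinet-merged is an illustrative path)
python convert_hf_to_gguf.py ./fortinet-merged \
  --outfile fortinet-lora-f16.gguf --outtype f16

# Quantize the f16 GGUF down to Q4_K_M
./llama-quantize fortinet-lora-f16.gguf fortinet-lora-q4k.gguf Q4_K_M
```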

## Intended Use

- Network admins and security engineers working with FortiGate firewalls and other Fortinet devices.
- Quick lookup of CLI commands, log field meanings, troubleshooting steps, and configuration advice.
- Can be run locally and offline on consumer hardware (with llama.cpp or similar).

## Example Usage

Prompt:

```
What is the FortiGate CLI command to show interface status?
```

Response:

```
To show interface status on FortiGate, use: show system interface
```

Prompt:

```
What does the policyid field mean in FortiOS logs?
```

Response:

```
The policyid field indicates the firewall policy ID that matched and processed the traffic.
```

## Files Included

- fortinet-lora-q4k.gguf — the quantized GGUF model (Q4_K_M)

- tokenizer.json and tokenizer_config.json
- config.json (optional, for reference)
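To grab just the model file from this repo, something like the following should work (a sketch assuming the `huggingface-cli` tool is installed; the repo id matches the citation below):

```bash
# Download only the quantized model into the current directory
huggingface-cli download NOYOUllm2/fortinet-lora-gguf \
  fortinet-lora-q4k.gguf --local-dir .
```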

## How to Use

With llama.cpp:

```bash
./main -m fortinet-lora-q4k.gguf
```
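For a one-shot question you can pass the prompt inline; the flags below are standard llama.cpp options and the prompt text is illustrative (on recent builds the binary is named `llama-cli` rather than `main`):

```bash
# Ask a single question and cap the reply at 128 tokens
./main -m fortinet-lora-q4k.gguf \
  -p "What is the FortiGate CLI command to show interface status?" \
  -n 128
```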

With LM Studio: Simply select and load the fortinet-lora-q4k.gguf file.
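With Ollama (listed among the compatible tools above), a minimal sketch is to point a Modelfile at the local GGUF; the model name `fortinet-lora` here is arbitrary:

```bash
# Register the local GGUF with Ollama and run it
cat > Modelfile <<'EOF'
FROM ./fortinet-lora-q4k.gguf
EOF

ollama create fortinet-lora -f Modelfile
ollama run fortinet-lora "What does the policyid field mean in FortiOS logs?"
```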

## Limitations & Warnings

- This model is not affiliated with or endorsed by Fortinet, Inc.
- Responses are based on training data and may not reflect the latest FortiOS versions or official best practices.
- For critical configurations, always consult official Fortinet documentation.

## Citation

If you use this model or dataset, please cite:

```bibtex
@misc{noyoullm2_fortinet-lora-gguf,
  author       = {NOYOUllm2},
  title        = {Fortinet LoRA GGUF (Q4_K_M)},
  year         = {2024},
  howpublished = {Hugging Face: https://huggingface.co/NOYOUllm2/fortinet-lora-gguf}
}
```

Questions? Open an issue or reach out via Hugging Face.

This model was created with ❤️ using open-source tools, for the Fortinet admin and cybersecurity community.
