MedQuAD LoRA r=4

Configuration

  • Base model: mistralai/Mistral-7B-Instruct-v0.3
  • LoRA rank (r): 4
  • Target modules: q_proj, k_proj, v_proj
  • Quantization: 4-bit NF4
  • Early stopping: patience=3
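
A minimal sketch of how this setup could be expressed with peft and transformers; the values in the list above come from the card, while lora_alpha and the compute dtype are assumptions.

import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization of the base model (QLoRA-style setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

# Rank-4 LoRA adapter on the attention projections listed above
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,  # assumed; not stated in the card
    target_modules=['q_proj', 'k_proj', 'v_proj'],
    task_type='CAUSAL_LM',
)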

Training

Training logs (recorded manually; epoch values are estimates; 150 max steps):

Step | Epoch | Training Loss | Validation Loss
  50 | 0.023 |      0.710800 |        0.791143
 100 | 0.046 |      0.672000 |        0.788682
 150 | 0.070 |      0.648600 |        0.774913
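
The 50-step evaluation cadence and 150-step cap above, together with early stopping at patience=3 and keeping the best checkpoint, map onto a Trainer setup roughly like this sketch; output_dir, the dataset variables, and any argument not listed in the card are assumptions.

from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

# Evaluate/save every 50 steps, cap at 150 steps, keep the best checkpoint,
# and stop if validation loss fails to improve for 3 consecutive evaluations.
args = TrainingArguments(
    output_dir='medquad-lora-r4',   # assumed
    max_steps=150,
    logging_steps=50,
    eval_strategy='steps',          # 'evaluation_strategy' on older transformers versions
    eval_steps=50,
    save_strategy='steps',
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model='eval_loss',
)

trainer = Trainer(
    model=peft_model,               # hypothetical PEFT-wrapped model
    args=args,
    train_dataset=train_ds,         # hypothetical tokenized MedQuAD splits
    eval_dataset=val_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()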

Evaluation

  • BERTScore F1 (on a 50-example test sample): 0.8451
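
A sketch of how such a score could be computed with the evaluate library; the two strings below are hypothetical stand-ins for a generated answer and the corresponding MedQuAD reference answer (the actual evaluation would use lists of 50).

import evaluate

# Hypothetical placeholders: generated answers vs. MedQuAD reference answers
predictions = ['Glaucoma is a group of eye diseases that damage the optic nerve.']
references = ["Glaucoma is a group of diseases that can damage the eye's optic nerve."]

bertscore = evaluate.load('bertscore')
results = bertscore.compute(predictions=predictions, references=references, lang='en')
mean_f1 = sum(results['f1']) / len(results['f1'])  # mean F1 over the sample
print(f'BERTScore F1: {mean_f1:.4f}')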

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model with the same 4-bit NF4 quantization used for training
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type='nf4')
base = AutoModelForCausalLM.from_pretrained('mistralai/Mistral-7B-Instruct-v0.3', quantization_config=bnb, device_map='auto')
model = PeftModel.from_pretrained(base, 'CHF0101/medquad-lora-r4-best-v2')  # attach the LoRA adapter
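
From here, generation follows the usual Mistral-Instruct pattern; the sketch below assumes the adapter was trained on questions formatted with the base model's chat template, and the question shown is only an example.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-Instruct-v0.3')

# Format an example medical question with the Mistral instruct chat template
messages = [{'role': 'user', 'content': 'What are the symptoms of glaucoma?'}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt').to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))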