Built with Axolotl

Axolotl config (axolotl version 0.12.2):

# Base model configuration
base_model: Qwen/Qwen3-4B-Instruct-2507
load_in_4bit: true
bnb_4bit_compute_dtype: bfloat16
bnb_4bit_quant_type: nf4
bnb_4bit_use_double_quant: true

# LoRA configuration
adapter: lora
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
lora_target_linear: true
lora_fan_in_fan_out: false

# Dataset (see the record-format sketch under "Training and evaluation data" below)
chat_template: qwen3
datasets:
  - path: /workspace/tool_data_1012_89086.json
    type: chat_template
    roles_to_train: ["assistant"]
    field_messages: messages
    message_property_mappings:
      role: role
      content: content

val_set_size: 0.05
output_dir: checkpoints

# Sequence length
sequence_len: 8192
pad_to_sequence_len: true
sample_packing: false
eval_sample_packing: false
group_by_length: true

# Training parameters
num_epochs: 3
micro_batch_size: 6
gradient_accumulation_steps: 4
eval_batch_size: 4

# Optimizer
optimizer: adamw_bnb_8bit
lr_scheduler: cosine_with_restarts
cosine_restarts: 2
learning_rate: 1e-4
warmup_ratio: 0.05
weight_decay: 0.01

# Precision
bf16: auto
tf32: true
gradient_checkpointing: true
flash_attention: true

# ========== Key: checkpoint saving strategy ==========
save_strategy: steps
eval_strategy: steps
eval_steps: 500  # evaluate every 500 steps (roughly every 1/6 epoch; adjust to dataset size)
save_steps: 500  # keep consistent with eval_steps

save_total_limit: 1  # keep only the single best checkpoint
load_best_model_at_end: true  # load the best checkpoint when training ends
metric_for_best_model: eval_loss  # use the validation-set loss
greater_is_better: false  # lower loss is better

logging_steps: 30

# DeepSpeed
deepspeed: zero2.json

# Other
ddp_timeout: 3600
ddp_find_unused_parameters: false
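
For reference, the quantization and adapter sections of the config above correspond to a standard QLoRA setup. The sketch below mirrors those values using transformers and peft; it is illustrative only and not the training entry point (Axolotl builds the model itself from the YAML, launched e.g. with `axolotl train config.yaml`).

```python
# Illustrative sketch only: mirrors the 4-bit quantization and LoRA values
# from the config above using transformers + peft. Axolotl does this
# internally; this is not how training was actually launched.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: true
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: true
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=64,               # lora_r
    lora_alpha=128,     # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```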
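
The referenced zero2.json is not included in this card. As an assumption only, a minimal ZeRO stage-2 DeepSpeed config of the kind commonly paired with the HF Trainer integration could be written out like this (the actual file used for this run may differ):

```python
# Hypothetical minimal ZeRO stage-2 config; NOT the actual zero2.json used here.
# "auto" values are resolved by the HF Trainer / Axolotl DeepSpeed integration.
import json

zero2 = {
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": "auto"},
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_clipping": "auto",
}

with open("zero2.json", "w") as f:
    json.dump(zero2, f, indent=2)
```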

checkpoints

This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507 on the /workspace/tool_data_1012_89086.json dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0672
  • Max memory active (GiB): 95.9
  • Max memory allocated (GiB): 95.9
  • Device memory reserved (GiB): 124.48

Model description

More information needed

Intended uses & limitations

More information needed
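
A minimal inference sketch, assuming the LoRA adapter from this run is published in this repository (cjkasbdkjnlakb/agent-1013); swap in a local checkpoint path if the adapter is stored elsewhere:

```python
# Minimal inference sketch. Assumption: the adapter weights live in
# "cjkasbdkjnlakb/agent-1013"; replace with a local path if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "cjkasbdkjnlakb/agent-1013"  # assumed adapter repo for this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "List the files in /tmp."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```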

Training and evaluation data

More information needed
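
The contents of /workspace/tool_data_1012_89086.json are not documented here, but the dataset section of the config (type: chat_template, field_messages: messages, roles_to_train: ["assistant"]) implies a conversational format with role/content messages where only assistant turns contribute to the loss. A hypothetical record in that shape, for illustration only:

```python
# Hypothetical example of one record in the shape the config expects
# (field_messages: messages; role/content keys; assistant turns trained).
# The real records and any tool-call encoding in tool_data_1012_89086.json
# are not documented in this card.
example_record = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Shanghai tomorrow?"},
        {"role": "assistant", "content": "I'll look that up with the weather tool."},
    ]
}
```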

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 6
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 192
  • total_eval_batch_size: 32
  • optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_steps: 65
  • training_steps: 1310
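
The derived values above follow directly from the config: the total train batch size is micro_batch_size × gradient_accumulation_steps × num_devices, and the warmup steps come from warmup_ratio × training_steps. A quick check:

```python
# Sanity check of the derived hyperparameters reported above.
micro_batch_size = 6
eval_batch_size = 4
gradient_accumulation_steps = 4
num_devices = 8
training_steps = 1310
warmup_ratio = 0.05

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices
warmup_steps = int(warmup_ratio * training_steps)

print(total_train_batch_size)  # 192
print(total_eval_batch_size)   # 32
print(warmup_steps)            # 65
```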

Training results

| Training Loss | Epoch  | Step | Validation Loss | Mem Active (GiB) | Mem Allocated (GiB) | Mem Reserved (GiB) |
|---------------|--------|------|-----------------|------------------|---------------------|--------------------|
| No log        | 0      | 0    | 1.1993          | 50.61            | 50.61               | 51.0               |
| 0.0695        | 1.1442 | 500  | 0.0686          | 95.9             | 95.9                | 124.48             |
| 0.0681        | 2.2885 | 1000 | 0.0672          | 95.9             | 95.9                | 124.48             |

Framework versions

  • PEFT 0.17.0
  • Transformers 4.55.2
  • PyTorch 2.6.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.21.4