This repository contains a Buffett Agent LoRA adapter for a demo use case of FinLoRA: Benchmarking LoRA Methods for Fine-Tuning LLMs on Financial Datasets.

Built with Axolotl (axolotl version: 0.10.0). The axolotl config used for training is shown below.

base_model: meta-llama/Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
gradient_accumulation_steps: 2
micro_batch_size: 8
num_epochs: 4
learning_rate: 0.0001
optimizer: adamw_torch_fused
lr_scheduler: cosine
load_in_8bit: false
load_in_4bit: false
adapter: lora
lora_r: 64
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
val_set_size: 0.02
output_dir: /workspace/FinLoRA/lora/axolotl-output/buffett_agent_llama_3_1_8b_8bits_r64_rslora
sequence_len: 4096
gradient_checkpointing: true
logging_steps: 500
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
deepspeed: deepspeed_configs/zero1.json
bf16: auto
tf32: false
chat_template: llama3
wandb_name: buffett_agent_llama_3_1_8b_8bits_r64_rslora
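
For reference, the adapter settings in the config above map to roughly the following PEFT LoraConfig. This is a minimal sketch, assuming the peft library were used directly rather than through Axolotl; it is not part of the original training code.

```python
from peft import LoraConfig

# Sketch only: LoRA hyperparameters copied from the Axolotl config above.
lora_config = LoraConfig(
    r=64,                                           # lora_r
    lora_alpha=16,                                  # lora_alpha
    lora_dropout=0.05,                              # lora_dropout
    target_modules=["q_proj", "k_proj", "v_proj"],  # lora_target_modules
    task_type="CAUSAL_LM",                          # causal language modeling
)
```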

buffett_agent_llama_3_1_8b_8bits_r64_rslora

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the /workspace/FinLoRA/data/train/warren_buffett_train.jsonl dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4060
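
A minimal usage sketch for loading this adapter on top of the base model with transformers and peft is shown below. The adapter repo id (ghostof0days/buffett_agent_llama_3_1_8b_8bits_r64_rslora) and the llama3 chat template are taken from this card; the prompt text and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "ghostof0days/buffett_agent_llama_3_1_8b_8bits_r64_rslora"

# Load the base model in bf16 and apply this LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Build a chat prompt with the Llama 3 chat template and generate a reply.
messages = [{"role": "user", "content": "How do you evaluate a company's economic moat?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```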

Model description

This is a LoRA adapter for meta-llama/Llama-3.1-8B-Instruct, trained with Axolotl for the FinLoRA Buffett Agent demo. The adapter uses rank 64, alpha 16, and dropout 0.05 on the q_proj, k_proj, and v_proj attention projections.

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned on the /workspace/FinLoRA/data/train/warren_buffett_train.jsonl dataset, with 2% of the data (val_set_size: 0.02) held out as the evaluation set.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64 (batch-size arithmetic sketched after this list)
  • total_eval_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 1225
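
The reported total batch sizes follow from the per-device settings; a quick sanity check of the arithmetic (a sketch, not training code):

```python
# Batch-size arithmetic implied by the hyperparameters above.
micro_batch_size = 8             # per-device train/eval batch size
gradient_accumulation_steps = 2  # gradient accumulation (train only)
num_devices = 4                  # multi-GPU setup

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices
print(total_train_batch_size, total_eval_batch_size)  # 64 32
```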

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0      | 0    | 2.1664          |
| No log        | 0.2512 | 77   | 1.5421          |
| No log        | 0.5024 | 154  | 1.4849          |
| No log        | 0.7537 | 231  | 1.4618          |
| No log        | 1.0033 | 308  | 1.4463          |
| No log        | 1.2545 | 385  | 1.4391          |
| No log        | 1.5057 | 462  | 1.4351          |
| 1.4756        | 1.7569 | 539  | 1.4292          |
| 1.4756        | 2.0065 | 616  | 1.4233          |
| 1.4756        | 2.2577 | 693  | 1.4196          |
| 1.4756        | 2.5090 | 770  | 1.4142          |
| 1.4756        | 2.7602 | 847  | 1.4103          |
| 1.4756        | 3.0098 | 924  | 1.4092          |
| 1.3755        | 3.2610 | 1001 | 1.4073          |
| 1.3755        | 3.5122 | 1078 | 1.4062          |
| 1.3755        | 3.7635 | 1155 | 1.4060          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.52.3
  • PyTorch 2.8.0.dev20250319+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1