TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning

arXiv Paper   GitHub Project   HuggingFace Models

TimeMaster is a reinforcement-learning-enhanced framework for training time-series multimodal large language models (MLLMs). It enables structured, interpretable reasoning over visualized time-series signals and has been evaluated on real-world tasks such as EMG, ECG, and Human Activity Recognition (HAR) using Qwen2.5-VL-3B-Instruct.

How To Use

This model is intended ONLY as the cold-start SFT checkpoint for TimeMaster's RL post-training on the CTU dataset. For usage instructions, please refer to TimeMaster's README.md.
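
For reference, below is a minimal inference sketch for loading this checkpoint with the standard Hugging Face transformers Qwen2.5-VL classes. It is not TimeMaster's official pipeline; the image path, prompt text, and generation settings are illustrative placeholders, and the supported RL post-training workflow is described in the TimeMaster repository.

```python
# Minimal loading/inference sketch (assumption: standard transformers Qwen2.5-VL API;
# the image path and prompt below are placeholders, not TimeMaster's official setup).
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU"

# Load the SFT checkpoint and its processor; the weights are stored in BF16.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# A visualized time-series plot plus a task instruction (placeholder values).
image = Image.open("path/to/time_series_plot.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Classify this signal and explain your reasoning."},
        ],
    }
]

# Build the chat prompt, encode text and image, and generate a response.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], padding=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```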
